MPSoC and ‘The Vision Thing’

By Ian Mackintosh | Posted: September 1, 2007
Topics/Categories: EDA - ESL

We have entered the era of the multi-processor system-on-chip (MPSoC) but it remains a major frustration that, for a technology that is so imminent and so necessary, there is as yet no real vision out there that goes beyond the parochial. Yes, ‘point’ issues are also being addressed, but we need to define the concept, the vision, and the overview on a much broader basis.

For example, the well-attended Corezilla panel at this year’s DAC did a representative job of gathering together the stakeholders in the multi-processor and MPSoC worlds, but what it mainly served to illustrate was the separation between the interests and the technologies required to create a uniform methodology that addresses the challenge before us.

This segmentation has existed for two decades in the design space and passed over into the SoC space. Now it has spread to, and is increasing the pain in, the software area, particularly as projects add armies of engineers to solve the embedded software problem. So how do we define this separation, in the hope that we might find a way of addressing it? Let’s consider where the main players stand. For their part, hardware providers seem to be suffering from a worrying degree of myopia, in that they look likely to push forward into MPSoC from the perspectives of legacy solutions and the protection of existing investments. Software providers are poised to do exactly the same.

Elsewhere, the multi-processing and high-performance computing providers are, arguably, culturally unsuited to making contributions to the MPSoC space. This is in spite of the fact that they have valuable knowledge and experience, and a real understanding of some very relevant ‘dos and don’ts’. Their problem is that MPSoC is a divergent technology, one removed from their traditional core competences, in that high-performance computing tends to focus on discrete implementations.

Finally, there is academia. Here again, tradition suggests a focus on point solutions, issues and research. The results may have some overspill value at the critical methodological level, but most likely only on an incremental basis.

Given this landscape, where does the real burden fall and how can we make the necessary advances in realizing an MPSoC vision?

Progress today is dictated by increasing transistor count, the cost-effectiveness of silicon availability, and an insatiable need for product advancement that is being driven by convergence and increased functionality. In the specific case of MPSoC, a further important influence is consumer products. History suggests that any solutions will probably have to come from the efforts of savvy system architects working in conjunction with design methodology experts – not one or the other, but both. In short, the end-user community is on the line here and so will have to provide the answers.

Moreover, technology boundaries will need to be crossed frequently so that we fully understand the system design and software challenges and, from that foundation, the minimum sets of tools people will require to implement these increasingly complex MPSoCs in a practical way.

So, can we expect to see rapid and fundamental advances and leaps in the availability of tools that will have a dramatic impact on the development process? Probably not, and there are a couple of reasons for that. One, as noted earlier, is the protectionist position that is being taken by players who feel a need to defend an existing space. However, the second comes down to the sheer number of variables involved in MPSoC development.

Solutions are by their nature generic and typically are based upon specific cases with a few manageable variables, but the process of defining an MPSoC is based on many variables. How many processors are involved? What types of processor are they? What type of software is being run? Which processors is it being run on? Who’s doing the programming and, again, on which processors? What sort of skill level do the programmers have? And what level of efficiency must they reach so that they are effective in delivering the appropriate software?

These variables mean that you cannot have one set of tools to solve everything. Nobody is going to come up with three new tools, a methodology and suddenly make everything hunky dory. A diverse set of parameters means that you are more likely to see a series of incremental steps forward – and then most likely on different fronts within general MPSoC design at different times. This is about evolution not revolution.
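
To make that combinatorial explosion concrete, here is a minimal, purely illustrative sketch in C of a few of the variables a single MPSoC description would have to capture. Every name in it is invented for the purpose; it reflects no real tool, library or standard.

```c
/* Illustrative only: invented names, not any real tool or standard.
 * A toy record of a few of the design-space variables listed above. */

#include <stdio.h>

enum proc_type { PROC_RISC, PROC_DSP, PROC_GPU, PROC_ACCEL };
enum sw_class  { SW_BARE_METAL, SW_RTOS, SW_RICH_OS };

struct processor_desc {
    enum proc_type type;             /* what kind of processor it is   */
    enum sw_class  software;         /* what class of software it runs */
    int            programmer_skill; /* 1 (novice) .. 5 (expert)       */
};

struct mpsoc_desc {
    int                   num_processors;
    struct processor_desc proc[16];  /* arbitrary bound for the sketch */
};

int main(void) {
    /* A toy three-processor platform: one RISC host and two DSPs. */
    struct mpsoc_desc soc = {
        .num_processors = 3,
        .proc = {
            { PROC_RISC, SW_RICH_OS,    4 },
            { PROC_DSP,  SW_BARE_METAL, 3 },
            { PROC_DSP,  SW_BARE_METAL, 3 },
        },
    };
    printf("Platform described with %d processors\n", soc.num_processors);
    return 0;
}
```

Even this toy record says nothing about the interconnect, the memory hierarchy or how the software is partitioned across the processors, each of which multiplies the design space again.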

 

Figure: Cell – Sony, IBM and Toshiba’s poster child for MPSoC

So what can we expect? In the short term, platforms will continue to get larger and larger, and the challenge will be addressed by throwing more and more bodies at the development process. The market will work with point solutions that are either customized or created on a project-by-project basis to ease the burden of programming and deliver increasingly complex and converged functionality. Thus, MPSoC will initially be developed in a patchwork fashion, also inevitably leveraging the available base of legacy technology. Given the time and cost constraints we all face, people will just keep stitching different existing pieces together.

These patchwork solutions, though, are likely to be carried forward into later generations. Random tools and point solutions will be productized where they can help to unify the design work, the design teams and the system development. That, though, is a longer-term issue. Nevertheless, there are a couple of parallel moves that could help now. One is a re-architecting of the basic processor approaches and of strategies for software development productivity, working beyond the leveraging of legacy technology.

A second is the creation of standards that really empower developers by making both the hardware and software technology more readily interoperable and re-usable. We need to start from a new premise: that standards work should begin sooner rather than later, and that the standards development itself has to be completed quickly. We cannot talk in terms of standards that will take five years to develop.

Another requirement is that once standards are developed, they have to be shared enthusiastically. Indeed, it is sometimes the case that while a standard itself has a limited value, the process of collaboration that accompanies it has a huge value to everybody involved.

I came to the conclusion that we need constructive industry standards activity before I came to the issue of where my organization, OCP-IP, fits into the drive towards MPSoC. Nevertheless, OCP-IP is already engaged in work that addresses three key areas of the challenges before the industry.

Those of you who already follow our activities will be familiar with schemes addressing the future of debug and network-on-chip. These programs are progressing nicely and we have large, energized teams pushing forward on all fronts in those two areas.

However, we are now also addressing the area of cache coherence. Here, we are not restricting our focus to embedded processors, but looking across the entire processor space: DSP, graphics engine and so on. Snooping schemes have already been defined, and detailed evaluation and applications work will continue over the course of the year. We are also investigating directory-based schemes, with validation and analysis activities under way.
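
For readers less familiar with the distinction, the sketch below shows the textbook idea behind a directory-based approach: rather than having every cache snoop a shared bus, a per-line directory entry records which caches hold a copy and which one, if any, owns it exclusively. It is written in C with invented names and is in no way OCP-IP’s actual scheme; it is only meant to illustrate the concept under evaluation.

```c
/* Illustrative sketch of a toy directory entry for directory-based
 * cache coherence. Invented names; not any real protocol definition. */

#include <stdint.h>
#include <stdio.h>

enum line_state { LINE_INVALID, LINE_SHARED, LINE_MODIFIED };

struct dir_entry {
    enum line_state state;
    uint8_t         sharers; /* bitmask: bit i set => cache i holds a copy */
    int             owner;   /* meaningful only when state == LINE_MODIFIED */
};

/* Record a read by cache `id`. In a real protocol a modified line would
 * also be written back or forwarded by its owner before being shared. */
static void dir_read(struct dir_entry *e, int id) {
    if (e->state == LINE_MODIFIED)
        e->sharers |= (uint8_t)(1u << e->owner); /* owner keeps a shared copy */
    e->sharers |= (uint8_t)(1u << id);
    e->state = LINE_SHARED;
}

/* Record a write by cache `id`: all other copies must be invalidated. */
static void dir_write(struct dir_entry *e, int id) {
    e->sharers = (uint8_t)(1u << id);
    e->owner = id;
    e->state = LINE_MODIFIED;
}

int main(void) {
    struct dir_entry line = { LINE_INVALID, 0, -1 };
    dir_read(&line, 0);   /* cache 0 reads  -> shared by {0}     */
    dir_read(&line, 2);   /* cache 2 reads  -> shared by {0, 2}  */
    dir_write(&line, 2);  /* cache 2 writes -> modified, owner 2 */
    printf("state=%d sharers=0x%02x owner=%d\n",
           line.state, line.sharers, line.owner);
    return 0;
}
```

The general attraction of a directory scheme is that it avoids relying on a broadcast medium as core counts grow, which is the kind of property that validation and analysis work has to confirm against realistic embedded traffic.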

Yet the problem remains. Nobody as yet owns a technology or strategy that runs right across the MPSoC space. Many players are only in one part of one segment, and just how inevitably fragmented this makes the space is self-evident if you take a brief overview of what it contains: SoC architecture, SoC design, SoC implementation, embedded software development, embedded component development, tools that help manage the design methodology and tools that help develop embedded software, to name but a few. But there are no tools that help the overall process – there is nothing on the horizon that seems likely to satisfy this need.

At the same time – and again with a considerable degree of inevitability – the problem is exploding and disaggregating. Once upon a time, the designer would sit down, architect the design, go do the logic and circuit design, bring it all together and hand it over to a layout guy, and then on you went to manufacture. That activity, from what was once just the designer alone, is now split into architects, implementation teams, logic designers, circuit designers, foundation IP providers, core IP providers and so on. It is located in about 20 compartments of personnel already and that is just for the silicon. As for the software, that really is being addressed with burgeoning armies of people.

The methodology to create a unified system that all these people can leverage at the right place and at the right price (so that the long pole in the tent is the creation of silicon) just does not exist. That, however, does not prevent us from seeking to take the wisest steps forward until more ideal approaches ultimately emerge.
