Archive for the ‘central processing unit’ tag
ARM, the British multinational semiconductor and software design firm, is announcing graphics technology that could power a new generation of portable devices like tablets and smartphones with visuals rivaling game consoles such as Sony's PlayStation 3 and Microsoft's Xbox 360.
The technology is ARM's new Mali T600 series of designs, with anywhere from one to eight graphics computing cores, or brains. ARM develops the underlying architecture and the "cores," or basic blueprints of a chip, which its licensees fashion into chips for mobile devices. The new T600 designs will provide ammunition for scores of manufacturers to build ground-breaking portable machines.
The company's hallmark is creating power-efficient chips that are used in billions of devices each year, making them ideal for smartphones and tablets, where battery life is at a premium.
The ARM Mali family of graphics processing units (GPUs) renders the visuals on devices such as smart TVs, smartphones, and tablets. A Mali GPU can be used in a machine alongside an ARM central processing unit (CPU) and Khronos Group graphics software such as OpenGL ES 3.0, which is also debuting today.
“You’ll see this result in faster devices with more fluid user interfaces,” said Steve Steele, senior product manager at ARM, in an interview with GamesBeat. “It can handle graphics with increasing complexity and resolution.”
Long a leader in CPU cores, ARM began creating its Mali graphics cores in 2007. ARM Mali products are shipping now in about 70 percent of digital TVs, 20 percent of Android smartphones, and 50 percent of Android tablets. About 160 devices from manufacturers are shipping. About 12 partners shipped 48 million units in 2011, and 25 are expected to ship more than 100 million in 2012.
The previous Mali T400 series was unveiled in November 2010, and products based on it will be coming out in the second half of 2012. Devices based on the T600 series (T624, T628, and T678) are likely to be out next year.
ARM has to keep a steady cadence of designs coming for its licensees to keep up with the rapid pace of mobile innovation. The Mali T600 designs will be able to handle graphics at a resolution of 4K by 2K, far more pixels than today's high-definition TVs. They will also have adaptive scalable texture compression and a 50 percent increase in performance in the same chip area as the prior designs.
ARM has a total of 63 Mali licensees and 51 company partners. It has 34 licensees for the Mali T400 and eight for the Mali T600.
The ARM technology competes against rival designs from Nvidia, which makes the Tegra series of chips, and from Imagination Technologies.
The $35 Raspberry Pi set off a surge of demand for tiny Linux computers earlier this year. Now a Korean hardware manufacturer called Hardkernel is launching a high-end computer board that measures just 3.5 inches by 3.7 inches. The new board could become another favorite among hobbyist computer users.
The new Odroid-X board has a Samsung Exynos quad-core ARM-based central processing unit that runs at 1.4 gigahertz. It has a quad-core Mali ARM-based graphics processing unit, 1 gigabyte of random access memory, six universal serial bus ports, an Ethernet adapter, headphone and microphone jacks and an SDHC memory card slot for storage.
It has four times the RAM of the Raspberry Pi. The Hardkernel board uses the Cortex-A9 core from ARM, which is based on the ARMv7 architecture. It can run the latest version of Ubuntu Linux as well as the Android operating system.
Filed under: VentureBeat
Nvidia chief executive Jen-Hsun Huang told analysts today just how important cloud graphics will be to the world. The company revealed last week that its new Kepler-based graphics chips are capable of cloud graphics. That means they can process graphics for multiple users in a data center and then dispatch the appropriate images as needed to the displays of remote users.
That allows big new applications for cloud computing in the enterprise. You can, for instance, use your own puny laptop to access huge visual projects such as engineering designs. Your computer will tap the graphics computing power in the cloud to render the images that your laptop could never display in real-time. One graphics processing unit (GPU) in the cloud can supply the graphics for at least four remote users today, compared to just one for prior chips. That makes cloud gaming and enterprise cloud graphics applications far more economical than in the past.
“Now we have a GPU for the cloud, a virtual GPU,” Huang said at Nvidia’s analyst day in Santa Clara, Calif. “What that means is that a whole bunch of users can see one GPU and use it as if it were their own.”
Nvidia has created a lot of software that enables the virtualized GPU, which can take graphics processing commands from a variety of users and process them on the GPU without regard for where those commands are coming from. No longer do the processing tasks have to come from just one computer.
This allows the graphics chip to catch up with the central processing unit (CPU), which, when combined with hypervisor software originally created by IBM to share mainframe computers among many users, can serve many users from a single chip in a data center.
Huang said this cloud GPU will be valuable to the 25 million product designers around the world. These users now buy Quadro-based graphics workstations in the millions each year. Power graphics users make use of those designs and then share them. Other enterprise workers also need access to high-end graphics, but many of them don't have beefy computers. Virtualization gets around that problem, so you can run high-end graphics applications on a device as simple as an iPad. Hundreds of millions of enterprise users could benefit from cloud graphics, Huang said.
“That is completely empowering,” Huang said.
The Nvidia technology can handle virtualization quickly. It doesn’t use a software layer, or emulation, because that approach is too slow. The hardware has been tweaked for virtual GPU tasks.
“We wanted to put this in the cloud and for the first time, a GPU needs to be aware that a display may not be directly connected to the GPU,” he said. “It could be anywhere in the world. You just tell the GPU an address, and the bits coming out of the chip are sent to that address” to display the graphics on a distant computer.
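Conceptually, the setup Huang described works something like the toy sketch below. This is purely illustrative Python under the assumptions in the article, not Nvidia's actual software; the VirtualGPU class, its method names, and the addresses are hypothetical. It shows the two ideas at play: one GPU draining render commands queued by many users, and each finished frame being shipped to whatever network address that user registered.

```python
# Illustrative sketch only; not Nvidia's API. One "virtual GPU" services
# render commands from many users and sends each finished frame to the
# network address that user registered, as Huang described.
import socket
from collections import deque


class VirtualGPU:
    def __init__(self):
        self.queues = {}    # user_id -> queue of pending render commands
        self.displays = {}  # user_id -> (host, port) of that user's remote display

    def register_user(self, user_id, display_addr):
        self.queues[user_id] = deque()
        self.displays[user_id] = display_addr

    def submit(self, user_id, command):
        # Commands are queued per user; the GPU does not care where they come from.
        self.queues[user_id].append(command)

    def render_pass(self):
        # Round-robin across users, so one physical GPU is time-shared by all of them.
        for user_id, queue in self.queues.items():
            if queue:
                frame = self._render(queue.popleft())
                self._send(frame, self.displays[user_id])

    def _render(self, command):
        return f"rendered bits for {command}".encode()  # stand-in for real rendering

    def _send(self, frame, addr):
        # "You just tell the GPU an address, and the bits ... are sent to that address."
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(frame, addr)


if __name__ == "__main__":
    gpu = VirtualGPU()
    gpu.register_user("designer-1", ("203.0.113.7", 9000))  # hypothetical remote display
    gpu.register_user("designer-2", ("203.0.113.8", 9000))
    gpu.submit("designer-1", "rotate CAD model")
    gpu.submit("designer-2", "zoom engineering drawing")
    gpu.render_pass()
```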
Huang demonstrated an iPad running Windows and other demanding PC graphics applications hosted in the cloud. The iPad was able to show a high-end design program displaying a photorealistic image of a car. A total of 1,536 Kepler cores were accessible to the iPad, so it could handle the computing task easily.
As for games, Nvidia wants to use cloud graphics to make games as instantly convenient to play as TV shows are to watch. This taps a technology dubbed the GeForce Grid. You no longer have to spend hours downloading a big game or own a powerful computer to play it. That's similar to the vision of OnLive and Gaikai, both of which are partners with Nvidia in cloud graphics.
“This will do the same for video games what TV has done for video,” Huang said. “We think it could expand the market.”
Of course, high-end games that are played remotely could suffer from lag, where the inputs that a gamer makes aren’t immediately translated into actions in the game. So serious gamers are playing online games on PCs, rather than consoles, since the delays on a high-end PC are about 65 milliseconds and they are 166 milliseconds on an online console.
With the GeForce Grid powered by Gaikai, Nvidia can reduce the lag of the cloud graphics games to 161 milliseconds, or essentially just as fast as a game console. That could enable large numbers of casual gamers and non-gamers to play high-end games on relatively simple computing devices, Huang said.
Huang showed a demo of Hawken, an upcoming high-end 3D downloadable game, working in a cloud-based environment.
“For the game developer, we can expand their reach,” he said. “Today, you need to have the right hardware. But we can make it run on any hardware.”
On top of that, cloud-based games are piracy free, Huang said.
[Photo credit: Dean Takahashi]
Filed under: games
Advanced Micro Devices (AMD) is introducing a new generation of chips that combine graphics and processors on a single integrated circuit. Code-named Trinity, the new chips are supposed to stop Intel from dominating the consumer Ultrabook laptop and all-in-one desktop chip markets.
The A-series accelerated processing unit (APU) chip family is speedy, and it goes to show that five brains are better than one. AMD also intends to show that its hybrid chips are better than Intel's latest hybrid processor-graphics chips, code-named Ivy Bridge. The arrival of both chip families at the same time is a coincidence, and a happy one for consumers shopping for PCs.
The new chips deliver double the performance per watt of their predecessors. They enable high-performance laptops with as much as 12 hours of battery life.
Trinity has two dual-core processors embedded in it for a total of four processing brains. It also has a high-end graphics core that is more capable than the Ivy Bridge graphics, said John Taylor, director of global product marketing at Sunnyvale, Calif.-based AMD. The central processing unit (CPU) portion of the chip is 29 percent faster than last year’s Llano-based APUs while the graphics render 56 percent faster.
“We’ve boosted the CPU and graphics architecture over last year’s Llano chips,” Taylor said. “This is the best-in-class, all-in media performance from gaming to high-definition video.”
While Llano chips had 1.178 billion transistors, or basic computing components, Trinity has 1.303 billion. The Trinity chips are slightly bigger at 246 square millimeters, compared to 228 for Llano. And they operate on about the same amount of wattage, from 35 to 100 watts. A dual-core version of the A series chip family runs on 17 watts, while a quad-core version runs on as little as 25 watts.
Like Intel’s Ivy Bridge chips, which were formally unveiled last month, the AMD chips are aimed at high-performance but thin laptops. Intel calls these laptops Ultrabooks, and it is spending hundreds of millions of dollars marketing them this year in the hopes of staving off the encroachment of smartphones and tablets on the PC market. AMD believes that laptops based on Trinity will be less expensive than the $700 to $1,000 prices that computer makers are charging for Ivy Bridge-based Ultrabooks.
“We’ve got a clear value leadership position in this ultrathin category,” Taylor said.
Trinity also represents a threat to Nvidia, which makes stand-alone graphics chips. AMD is trying to make the hybrid graphics in its APUs so good at running 3D graphics that consumers won’t see the need for the extra Nvidia chips anymore. The AMD graphics core is based on the AMD Radeon Northern Islands graphics technology. On games such as Crysis, AMD’s graphics will render 40 percent faster. Games will run 20 percent to 50 percent faster on AMD compared to Intel, Taylor said.
From May 15 through the launch of Microsoft’s Windows 8 operating system this fall, AMD believes more than 100 computers based on Trinity will debut. Trinity chips will compete head to head against Intel’s Core i7 and Core i5 microprocessors. Taylor contends that AMD chips will provide laptops with a couple of more hours of battery life in some usage scenarios. AMD’s chips will be priced below Intel’s Core i7 chips.
Microsoft’s next-generation machine is rumored to be code-named Durango. That name was partially verified on Feb. 28, when a senior technical artist at Crytek tweeted that he was “enjoying the Durango developers summit in London. So far, great swag and interesting talks.” He then deleted the tweet and his account. In meetings, Microsoft reportedly showed details of the new console hardware.
Now Xbox World magazine is reporting that development kits for the new console were shipped to developers in March. Those kits include an IBM PowerPC central processing unit (CPU) with 16 cores, compared to just three cores on the Xbox 360's CPU. The graphics chip is equivalent to an Advanced Micro Devices Radeon HD 7000-series graphics card, and the machine has a Blu-ray optical drive. The report mentioned that Kinect 2 (the sequel to Microsoft's motion-sensing system) will chew up as many as four cores tracking multiple players.
Nintendo got dinged by unnamed developers who said that the Wii U’s performance isn’t as good as either the Xbox 360 or the PlayStation 3. It’s a little hard to believe that a brand new Nintendo console debuting in the fall of 2012 will not be faster than the consoles that debuted in 2005 and 2006, respectively. But Nintendo will make the leap to high-definition TVs with the graphics on its new machine, and it has never put a priority on processing power in the first place.
“We do not focus on technology specs,” said a spokesperson for Nintendo of America. “We understand that people like to dissect graphics and processing power, but the experience of playing will always be more important than raw numbers.”
Nintendo isn’t confirming that the report is untrue, but it isn’t denying it either.
Sony's PlayStation 4, code-named Orbis, got outed by Kotaku last week. The Orbis reportedly has an AMD 64-bit CPU and an AMD Southern Islands GPU (graphics processing unit). The GPU will be capable of displaying games at a resolution of 4096 x 2160. And it will play games in stereoscopic 3D at 1080p high-definition resolution. It will not be compatible with PS3 games and will not play used games.
VG247 said that the next PlayStation will be released prior to the Durango machine from Microsoft.
We’re checking with the companies to see if they want to comment further. But don’t hold your breath.
[Image credits: Sillegamer, Kotaku and Nintendo]
Filed under: games
Unity Technologies announced today that its developers will be able to take the games they’ve created and publish them in the Flash 3D format, allowing them to reach much bigger audiences on the web than before.
Unity is beginning the open beta test for its Unity 3.5 game development engine. Since Adobe released Flash Player 11 in October, Flash games have been able to run 3D graphics and take advantage of the 3D hardware in a user's computer. That, in turn, makes it possible to run 3D games created with Unity's 3D game engine tools and distribute them to the wider web via a Flash add-on.
For developers, this means that they will be able to get high-quality Unity-based 3D games out to a larger mass market, since Flash is present on more than 90 percent of users' computers. For consumers, it could mean much better-looking 3D games running right in the browser.
“We are finally delivering on 3D for everyone,” said David Helgason, chief executive of San Francisco-based Unity, in an interview. “Flash is the last major platform for us, and the biggest one of all. It’s amazing. We think 3D content will burst out on the internet.”
To encourage developers to port their Unity games to Flash, Unity Technologies is holding a contest with a $20,000 prize. Helgason said that it took months of engineering work to make Unity compatible with Adobe’s Stage3D web technology, which enables Flash to make use of 3D hardware in a user’s computer. In the past, Flash was based on exploiting only the central processing unit (CPU).
Helgason said that Unity games will now be almost universally available on the web via Flash, on the consoles, on iOS and Android, and a variety of other smaller platforms. The open beta is available for download at 8 am Pacific time on Thursday, Dec. 22. The new version of Unity also supports Google's Native Client technology for the Chrome browser.
"We believe this will make Adobe Flash into a AAA games platform," Helgason said.
While more mobile technology companies are exploring gesture recognition options, the implementation of touch-free solutions further taxes the processors of mobile devices, leading to reduced battery life. Any alternative that can extend the battery life of mobile electronics while implementing new gesture recognition software is a welcome innovation in the field.
Ceva offers a flexible hardware platform, the Ceva-MM3000, that can be installed in a variety of multimedia devices, ranging from smartphones to digital televisions. The solution essentially takes some of the heavy computing work off the shoulders of the device's main processor, significantly improving power efficiency. EyeSight's hand-gesture recognition software can be used with any device that features a camera, including portable game consoles, enabling these electronics to provide players with an experience similar to Microsoft's Kinect.
Under the new investment agreement, EyeSight’s touch-free solutions will be available to users of the Ceva-MM3000 platform. Given the computation-intensive requirements of gesture recognition, the partnership with EyeSight seems only natural.
For example, Korean phone maker Pantech recently integrated EyeSight’s hands-free technology into its products to make tasks such as answering the phone while driving easier – a simple wave of the hand will do the trick. The integration of the Ceva-MM3000 platform could reduce the workload tackled by the phone’s main central processing unit (CPU) and further extend its battery life, despite the complex nature of gesture recognition.
“Semiconductor vendors require higher performance, lower power and more flexible processor solutions to successfully incorporate advanced gesture technology into their product designs,” Gideon Shmuel, CEO of eyeSight, explained in a statement from the company. “Our strategic partnership with Ceva brings together our industry-leading gesture recognition technologies with the industry’s most advanced ISP and video platform to offer the Ceva-MM3000 users a powerful, software-based solution that delivers dramatically improved performance.”
With Ceva's investment in its pocket, EyeSight has now accumulated more than $4 million in investments, completing its second round of funding.
Adobe created a hole in the market when it finally admitted that it would be unable to create a version of Flash that could run on mobile devices. Sibblingz is stepping into that hole to provide game developers with a platform that enables them to create web, iOS, and Android games at the same time.
“Mobile Flash blew up,” said Peter Relan, chairman of YouWeb, the incubator that created Sibblingz, a Burlingame, Calif.-based startup. “Now we have this new version of Spaceport to serve the developers who were otherwise depending on Adobe.”
Relan said in an interview that Spaceport 3.0 has been designed to be a drop-in replacement for Flash, which Adobe gave up on adapting to mobile devices. For years, Flash ran fine by tapping a computer's central processing unit (CPU). But in the mobile era, CPUs were too underpowered to run Flash properly. If Adobe had adapted Flash to run on graphics processing units (GPUs), it would have worked fine on mobile devices. But Adobe never did that.
As a result, developers can use Spaceport to create games that run on web sites, on iOS devices, or on Android phones. Relan said the work on this began after Sibblingz shipped Spaceport 2.0 in the spring. Developers then offered feedback on what they wanted in version 3.0. After that, Sibblingz created an application programming interface (API) that mimicked the Flash API, making it a lot easier to take a Flash game and move it over to Spaceport.
Michael Cai, an analyst at market researcher Interpret, said that game developers are looking for alternatives like Sibblingz because of Adobe's move. Spaceport 3.0 has a web service that automatically converts Flash animations (SWF files) created with Adobe tools into Spaceport's vector graphics format. These are then rendered on iOS or Android devices by the GPU-based rendering engine.
“Adobe is a great tools company, but they failed to do this for the developer community,” Relan said.
In a side-by-side comparison, games rendered with Spaceport 3.0 run visibly faster on Android, compared to rendering via a Flash mobile plug-in. (See the video below for a comparison). Developers can use Spaceport with no upfront fees.
YouWeb has an interesting position in the mobile game market, since it has created both iSwifter and Sibblingz. iSwifter enables developers who continue to create PC Flash games to get those games to run on iOS tablets via streaming technology. Sibblingz, meanwhile, lets game companies migrate from Flash to Spaceport.
Rivals include pure HTML5, which is slow and may be a couple of years away from having enough performance to run fast-moving games. Sibblingz is being used by CrowdStar and a couple of other game companies now.
Qualcomm announced its Snapdragon S4 class of mobile processors and enhanced its Snapdragon S1 chips for entry-level platforms. The S4 processors are aimed at lowering design, engineering, and inventory costs while bringing the latest 3G and 4G connection speeds to mobile data users. The company made the announcement at its analyst meeting in New York.
San Diego-based Qualcomm is trying to address the whole mobile market, from basic smartphones to high-end smartphones and tablets. The S4 chips are optimized for software that includes multimedia, connectivity, camera, display, security, power management, browsing, and natural user interface features. The S4 is a quad-core chip (four computing brains on one piece of silicon) built with a 28-nanometer manufacturing process.
Snapdragon chips are already in 300 smartphones and in about 350 more that are under development. The heart of the new chips is the Krait central processing unit (CPU), which is built from the ground up for mobile performance and power management.
The new S4 chips include the MSM8660A, MSM8260A, MSM8630, MSM8230, MSM8627, MSM8227, APQ8060A and APQ8030. Devices based on the S4 processors will appear in early 2012. The new models of the low-end S1 chip are the MSM7225A, MSM7625A, MSM7227A and MSM7627A. Those chips are aimed at 2G and 3G phones. Rivals include Texas Instruments, Marvell, Broadcom and Nvidia.
Snapdragon has more than 225 design wins with 30 different Android smartphone manufacturers.
About 4 billion smartphones are expected to be sold between 2011 and 2015, according to estimates from Gartner, Strategy Analytics, and IDC. Emerging regions are expected to account for 50 percent of sales by 2015.
Qualcomm shipped more than 483 million MSMs, or modem chips, during its fiscal year ended Sept. 30.
Filed under: mobile
Today marks the anniversary of the launch of the Xbox and Microsoft’s Halo video game a decade ago, as we’ve chronicled in our coverage. But those launches would never have happened without today’s other anniversary: the 40th anniversary of the launch of the original microprocessor.
Intel introduced the 4004 microprocessor on Nov. 15, 1971, billing it as a computer on a chip. It was a central processing unit (CPU), and its progeny would become the brains of most things electronic, including PCs, game consoles, supercomputers, digital cameras, servers, smartphones, iPods, tablets, automobiles, microwave ovens, and toys. Microprocessors have become an indispensable part of modern life, solving problems from displaying the time to calculating the effects of global warming.
“Today, there is no industry and no human endeavor that hasn’t been touched by microprocessors or microcontrollers,” said Federico Faggin, one of the trio of the microprocessor’s inventors, in a speech in 2009.
The first chip wasn't very powerful; it was originally designed to perform math operations in a calculator made by the Japanese firm Busicom. The 4-bit microprocessor ran at a speed of 740 kilohertz, compared to top speeds above 4 gigahertz today. If the speed of cars had increased at the same pace as chips, it would take about one second to drive from San Francisco to New York. Today's fastest Intel CPUs for PCs run 5,000 times faster than the 4004.
The microprocessor was a pioneering piece of work created by Faggin, Ted Hoff, and Stan Mazor, all working for Intel. They created the chip at the behest of their Japanese customer, Masatoshi Shima, who worked for Busicom.
The chips got better and faster through a phenomenon known as Moore’s Law. Observed by Intel chairman emeritus Gordon Moore in 1965, the law holds that the number of transistors on a chip doubles every two years. That is possible because chip equipment can shrink the circuitry on a chip so that the grid of components is finer and finer. As the circuits become smaller, the electrons travel shorter distances, so the chips become faster. They consume less electricity and throw off less heat. The shorter distances mean the chips can become smaller and cost less to make.
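As a back-of-the-envelope illustration of that doubling (an idealized projection, not an Intel roadmap), a few lines of Python show how starting from the 4004's 2,300 transistors in 1971 and doubling every two years lands in the low billions four decades later:

```python
# Idealized Moore's Law projection: transistor counts double every two years,
# starting from the 4004's 2,300 transistors in 1971. Real chips deviate from
# this curve; the numbers are illustrative only.
def projected_transistors(year, base_year=1971, base_count=2_300):
    doublings = (year - base_year) / 2  # one doubling every two years
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{projected_transistors(year):,.0f}")

# Output:
# 1971 2,300
# 1981 73,600
# 1991 2,355,200
# 2001 75,366,400
# 2011 2,411,724,800   (the "billions of transistors on a single chip" scale)
```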
Now the circuits are less than 32 nanometers wide and soon will be at 22 nanometers. By 2014, Intel is planning for 14 nanometers. A nanometer is a billionth of a meter. Over time, the microprocessors could do more and more things.
Intel lucked out by getting its microprocessors in the first IBM PC in 1981. In 2010, about 1 million PCs were shipped every day. By 2015, Gartner predicts there will be 2.25 billion PCs across the world, compared to 1 billion in 2008. The industry is expected to grow 10.5 percent to 387.8 million units in 2011. Each month, about 12 billion videos are watched on YouTube.
"When I was chief executive at Intel, I remember one of our young engineers coming in and describing how you could build a little computer for the home," Moore said in 2005. "I said, 'Gee, that's fine. What would you use it for?' And the only application he could think of was housewives putting their recipes on it."
After dozens of cycles of Moore's Law, Intel can put billions of transistors on a single chip, compared to 2,300 transistors for the 4004. That kind of exponential change over time is like comparing the population of a large village to the population of China.
Today's chips are built through a complex process involving hundreds of steps in clean rooms, or manufacturing environments where the air is 1,000 times cleaner than in a hospital operating room. Each new factory now costs $5 billion to build.
The amusing thing about the microprocessor project was that everything seemed accidental and unorganized. Intel was founded in 1968, and work on the microprocessor began not long after, in 1969.
Hoff was the 12th employee hired at Intel by chief executive Robert Noyce. Back in 1969, it didn't really make sense to talk about personal computers, Hoff once said. Shima wanted a calculator with a bunch of custom chips. But Hoff proposed to make a calculator using a general-purpose chip that could be programmed to do a number of different functions, according to the book The Microprocessor: A Biography by Michael S. Malone.
Shima didn’t like Hoff’s suggestion. But Intel founders Noyce and Moore encouraged Hoff to pursue the idea anyway. The Japanese would come around. Hoff and Mazor designed a solution with four chips: the 4-bit CPU, or the 4004; a read-only memory chip to store programs; a random access memory chip to hold data; and a shift register to provide connections to keyboards and other devices.
But they didn’t know how to translate this architecture into a working chip design. The project fell behind schedule. Then, in April 1970, Intel lured Faggin away from rival Fairchild Semiconductor. He was immediately assigned to work on the microprocessor.
Shima arrived at Intel from Japan to find that no work had been done on the chip since his last visit.
“You bad! You bad!” Shima raged when he arrived.
Faggin said, “I just arrived here! I was hired yesterday!”
“You late,” Shima said.
Faggin went to work 12 to 16 hours a day, fixing the remaining architectural issues and completing the logic and circuit design needed to get the chips into manufacturing. Within three months, he had translated the architecture into working designs for all four chips. In October 1970, the chip prototypes were run through the factory, but on the first run, the 4004 didn't work. In early 1971, corrections were made and new prototypes were manufactured. By March, Intel sent the first chips to Busicom. But Busicom wasn't that interested in the late chip.
Noyce saw the value of the microprocessor, though, and he secured the right to make the chip for other customers in May 1971. That turned out to be one of the best moves that Intel ever made.
Over time, the creators of the microprocessor received a series of accolades. In 1980, Hoff was named the first Intel Fellow and in 2010, President Barack Obama gave the trio the National Medal of Technology and Innovation.
In 2005, Moore observed, "What computers do well and what human brains do well are really completely different. I believe to get to real human intelligence we'll have to go back and... understand how the brain works better than we do now and take a completely different approach."
Intel's current CEO, Paul Otellini, said, "The transformations in computing have unleashed wave after wave of business and personal productivity and have provided billions of people with their first real opportunity to participate in the global economy. Yet, today I would submit that we are still at the very early stages in the evolution of computing. We've just begun to see its impact on the course of history."
He said that Intel is committing to making sure that Moore’s Law continues, despite the fact that Moore himself worries that the industry will run into fundamental limits of physics at some point in the future.
Now, researchers at IBM and elsewhere are trying to come up with chips that mimic the human brain. Justin Rattner, Intel’s chief technology officer, said in September, “The sheer number of advances in the next 40 years will equal or surpass all of the innovative activity that has taken place over the last 10,000 years of human history.”
He added, "At this point, it's fair to ask, what lies ahead? What lies beyond multi-core and many-core computing? And the answer... is called extreme-scale computing. This is multi-core and many-core computing at the extremes. Our 10-year goal here is to achieve nothing modest: a 300x improvement in energy efficiency for computing at the extremes."
Here’s a video that Intel created about the 4004.
Filed under: VentureBeat