Sunday, November 14, 2010

LATEST TRENDS AND TECHNOLOGIES OF STORAGE, MEMORY, PROCESSOR, PRINTING

One recent trend in the business world, owing largely to the fact that hard drive storage has become so much smaller and less expensive than in years past, is the emergence of companies that do nothing but back up information for businesses on a daily basis for very low prices. Since the backup is kept at a remote location, often far from where the business is located, lost files, computer crashes, data loss and the like become far less of a threat.
The information is always safe. In this way smaller businesses especially do not have to rely on their own tech staff, or that of an outside tech company, to bail them out when their system goes down. These services are quite reasonably priced, and competition is increasing.
Microprocessor
Intel 4004, the first general-purpose, commercial microprocessor
A microprocessor incorporates most or all of the functions of a computer’s central processing unit (CPU) on a single integrated circuit (IC, or microchip).[1] The first microprocessors emerged in the early 1970s and were used for electronic calculators, using binary-coded decimal (BCD) arithmetic on 4-bit words. Other embedded uses of 4-bit and 8-bit microprocessors, such as terminals, printers, various kinds of automation etc., followed soon after. Affordable 8-bit microprocessors with 16-bit addressing also led to the first general-purpose microcomputers from the mid-1970s on.
During the 1960s, computer processors were often constructed out of small and medium-scale ICs containing from tens to a few hundred transistors. The integration of a whole CPU onto a single chip greatly reduced the cost of processing power. From these humble beginnings, continued increases in microprocessor capacity have rendered other forms of computers almost completely obsolete (see history of computing hardware), with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers.
Since the early 1970s, the increase in capacity of microprocessors has been a consequence of Moore’s Law, which suggests that the number of transistors that can be fitted onto a chip doubles every two years. Although originally calculated as a doubling every year,[2] Moore later refined the period to two years.[3] It is often incorrectly quoted as a doubling of transistors every 18 months.
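To make the doubling rule concrete, here is a minimal back-of-the-envelope sketch in Python; the starting count (roughly the Intel 4004's ~2,300 transistors) and the 30-year horizon are illustrative assumptions, not figures from this article:

    # Back-of-the-envelope Moore's Law projection: the transistor count
    # doubles once every `doubling_period` years from an initial value.
    def transistors(years_elapsed, initial=2300, doubling_period=2.0):
        # initial=2300 is roughly the Intel 4004's transistor count (illustrative).
        return initial * 2 ** (years_elapsed / doubling_period)

    # Projecting 30 years out from 1971:
    print(f"{transistors(30):,.0f}")  # ~75,366,400, the right order of magnitude for a 2001 CPU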
In the late 1990s, and in the high-performance microprocessor segment, heat generation (TDP), due to switching losses, static current leakage, and other factors, emerged as a leading developmental constraint.[4]
Firsts
Three projects delivered a microprocessor at about the same time: Intel’s 4004, Texas Instruments (TI) TMS 1000, and Garrett AiResearch’s Central Air Data Computer (CADC).
Intel 4004
The 4004 with cover removed (left) and as actually used (right).
The Intel 4004 is generally regarded as the first microprocessor,[5][6] and cost thousands of dollars.[7] The first known advertisement for the 4004 is dated November 1971 and appeared in Electronic News.[8] The project that produced the 4004 originated in 1969, when Busicom, a Japanese calculator manufacturer, asked Intel to build a chipset for high-performance desktop calculators. Busicom’s original design called for a programmable chip set consisting of seven different chips. Three of the chips were to make a special-purpose CPU with its program stored in ROM and its data stored in shift register read-write memory. Ted Hoff, the Intel engineer assigned to evaluate the project, believed the Busicom design could be simplified by using dynamic RAM storage for data, rather than shift register memory, and a more traditional general-purpose CPU architecture. Hoff came up with a four-chip architectural proposal: a ROM chip for storing the programs, a dynamic RAM chip for storing data, a simple I/O device and a 4-bit central processing unit (CPU). Although not a chip designer, he felt the CPU could be integrated into a single chip. This chip would later be called the 4004 microprocessor.
The architecture and specifications of the 4004 came from the interaction of Hoff with Stanley Mazor, a software engineer reporting to him, and with Busicom engineer Masatoshi Shima, during 1969. In April 1970, Intel hired Federico Faggin to lead the design of the four-chip set. Faggin, who originally developed the silicon gate technology (SGT) in 1968 at Fairchild Semiconductor[9] and designed the world’s first commercial integrated circuit using SGT, the Fairchild 3708, had the correct background to lead the project since it was SGT that made it possible to implement a single-chip CPU with the proper speed, power dissipation and cost. Faggin also developed the new methodology for random logic design, based on silicon gate, that made the 4004 possible. Production units of the 4004 were first delivered to Busicom in March 1971 and shipped to other customers in late 1971.
TMS 1000
The Smithsonian Institution says TI engineers Gary Boone and Michael Cochran succeeded in creating the first microcontroller (also called a microcomputer) in 1971. The result of their work was the TMS 1000, which went commercial in 1974.[10]
TI developed the 4-bit TMS 1000 and stressed pre-programmed embedded applications, introducing a version called the TMS1802NC on September 17, 1971, which implemented a calculator on a chip.
TI filed for the patent on the microprocessor. Gary Boone was awarded U.S. Patent 3,757,306 for the single-chip microprocessor architecture on September 4, 1973. It may never be known which company actually had the first working microprocessor running on the lab bench. In both 1971 and 1976, Intel and TI entered into broad patent cross-licensing agreements, with Intel paying royalties to TI for the microprocessor patent. A nice history of these events is contained in court documentation from a legal dispute between Cyrix and Intel, with TI as intervenor and owner of the microprocessor patent.
A computer-on-a-chip is a variation of a microprocessor that combines the microprocessor core (CPU), some program memory and read/write memory, and I/O (input/output) lines onto one chip. The computer-on-a-chip patent, called the “microcomputer patent” at the time, U.S. Patent 4,074,351, was awarded to Gary Boone and Michael J. Cochran of TI. Aside from this patent, the standard meaning of microcomputer is a computer using one or more microprocessors as its CPU(s), while the concept defined in the patent is more akin to a microcontroller.
Pico/General Instrument
The PICO1/GI250 chip, introduced in 1971, was designed by Pico Electronics (Glenrothes, Scotland) and manufactured by General Instrument of Hicksville, NY.
In 1971 Pico Electronics[11] and General Instrument (GI) introduced their first collaboration in ICs, a complete single-chip calculator IC for the Monroe/Litton Royal Digital III calculator. This chip could also arguably lay claim to being one of the first microprocessors or microcontrollers, having ROM, RAM and a RISC instruction set on-chip. The layout for the four layers of the PMOS process was hand drawn at 500× scale on mylar film, a significant task at the time given the complexity of the chip.
Pico was a spinout by five GI design engineers whose vision was to create single-chip calculator ICs. They had significant previous design experience on multiple calculator chipsets with both GI and Marconi-Elliott.[12] The key team members had originally been tasked by Elliott Automation to create an 8-bit computer in MOS and had helped establish a MOS Research Laboratory in Glenrothes, Scotland in 1967.
Calculators were becoming the largest single market for semiconductors and Pico and GI went on to have significant success in this burgeoning market. GI continued to innovate in microprocessors and microcontrollers with products including the PIC1600, PIC1640 and PIC1650. In 1987 the GI Microelectronics business was spun out into the very successful PIC microcontroller business.
CADC
In 1968, Garrett AiResearch (which employed designers Ray Holt and Steve Geller) was invited to produce a digital computer to compete with electromechanical systems then under development for the main flight control computer in the US Navy’s new F-14 Tomcat fighter. The design was complete by 1970, and used a MOS-based chipset as the core CPU. The design was significantly (approximately 20 times) smaller and much more reliable than the mechanical systems it competed against, and was used in all of the early Tomcat models. This system contained “a 20-bit, pipelined, parallel multi-microprocessor”. The Navy refused to allow publication of the design until 1997. For this reason the CADC, and the MP944 chipset it used, are fairly unknown.[13] Ray Holt graduated from California Polytechnic University in 1968, and began his computer design career with the CADC. From its inception the project was shrouded in secrecy, until in 1998, at Holt’s request, the US Navy allowed the documents into the public domain. Since then, several people have debated whether this was the first microprocessor. Holt has stated that no one has compared this microprocessor with those that came later.[14] According to Parab et al. (2007), “The scientific papers and literature published around 1971 reveal that the MP944 digital processor used for the F-14 Tomcat aircraft of the US Navy qualifies as the first microprocessor. Although interesting, it was not a single-chip processor, and was not general purpose – it was more like a set of parallel building blocks you could use to make a special-purpose DSP form. It indicates that today’s industry theme of converging DSP-microcontroller architectures was started in 1971.”[15] This convergence of DSP and microcontroller architectures is known as a digital signal controller.
Gilbert Hyatt
Gilbert Hyatt was awarded a patent claiming an invention pre-dating both TI and Intel, describing a “microcontroller”.[16] The patent was later invalidated, but not before substantial royalties were paid out.[17][18]
Four-Phase Systems AL1
The Four-Phase Systems AL1 was an 8-bit bit-slice chip containing eight registers and an ALU.[19] It was designed by Lee Boysel in 1969.[20][21][22] At the time, it formed part of a nine-chip, 24-bit CPU with three AL1s, but it was later called a microprocessor when, in response to 1990s litigation by Texas Instruments, a demonstration system was constructed where a single AL1 formed part of a courtroom demonstration computer system, together with RAM, ROM, and an input-output device.[23]
8-bit designs
The Intel 4004 was followed in 1972 by the Intel 8008, the world’s first 8-bit microprocessor. According to A History of Modern Computing (MIT Press), pp. 220–21, Intel entered into a contract with Computer Terminals Corporation, later called Datapoint, of San Antonio, TX, for a chip for a terminal they were designing. Datapoint later decided not to use the chip, and Intel marketed it as the 8008 in April 1972. It was the basis for the famous “Mark-8” computer kit advertised in the magazine Radio-Electronics in 1974.
The 8008 was the precursor to the very successful Intel 8080 (1974), Zilog Z80 (1976), and derivative Intel 8-bit processors. The competing Motorola 6800 was released in August 1974, and the similar MOS Technology 6502 in 1975 (designed largely by the same people). The 6502 rivaled the Z80 in popularity during the 1980s.
A low overall cost, small packaging, simple computer bus requirements, and sometimes the integration of extra circuitry (e.g. the Z80’s built-in memory refresh circuitry) allowed the home computer “revolution” to accelerate sharply in the early 1980s. This delivered such inexpensive machines as the Sinclair ZX-81, which sold for US$99.
The Western Design Center, Inc. (WDC) introduced the CMOS 65C02 in 1982 and licensed the design to several firms. It was used as the CPU in the Apple IIe and IIc personal computers as well as in implantable-grade medical devices such as pacemakers and defibrillators, and in automotive, industrial and consumer devices. WDC pioneered the licensing of microprocessor designs, later followed by ARM and other microprocessor intellectual property (IP) providers in the 1990s.
Motorola introduced the MC6809 in 1978, an ambitious and well thought-out 8-bit design, source compatible with the 6800 and implemented using purely hard-wired logic. (Subsequent 16-bit microprocessors typically used microcode to some extent, as CISC design requirements were becoming too complex for purely hard-wired logic.)
Another early 8-bit microprocessor was the Signetics 2650, which enjoyed a brief surge of interest due to its innovative and powerful instruction set architecture.
A seminal microprocessor in the world of spaceflight was RCA’s RCA 1802 (aka CDP1802, RCA COSMAC), introduced in 1976, which was used onboard the Galileo probe to Jupiter (launched 1989, arrived 1995). The RCA COSMAC was the first microprocessor to implement CMOS technology. The CDP1802 was used because it could be run at very low power, and because a variant was available fabricated using a special production process (silicon on sapphire), providing much better protection against cosmic radiation and electrostatic discharges than any other processor of the era. Thus, the SOS version of the 1802 was said to be the first radiation-hardened microprocessor.
The RCA 1802 had what is called a static design, meaning that the clock frequency could be made arbitrarily low, even 0 Hz, a total stop condition. This let the Galileo spacecraft use minimum electric power for long, uneventful stretches of the voyage. Timers or sensors would wake the processor in time for important tasks, such as navigation updates, attitude control, data acquisition, and radio communication.
12-bit designs
The Intersil 6100 family consisted of a 12-bit microprocessor (the 6100) and a range of peripheral support and memory ICs. The microprocessor recognised the DEC PDP-8 minicomputer instruction set. As such it was sometimes referred to as the CMOS-PDP8. Since it was also produced by Harris Corporation, it was also known as the Harris HM-6100. By virtue of its CMOS technology and associated benefits, the 6100 was being incorporated into some military designs until the early 1980s.
16-bit designs
The first multi-chip 16-bit microprocessor was the National Semiconductor IMP-16, introduced in early 1973. An 8-bit version of the chipset was introduced in 1974 as the IMP-8.
Other early multi-chip 16-bit microprocessors include one used by Digital Equipment Corporation (DEC) in the LSI-11 OEM board set and the packaged PDP 11/03 minicomputer, and the Fairchild Semiconductor MicroFlame 9440, both of which were introduced in the 1975 to 1976 timeframe.
In 1975, National introduced the first 16-bit single-chip microprocessor, the National Semiconductor PACE, which was later followed by an NMOS version, the INS8900.
Another early single-chip 16-bit microprocessor was TI’s TMS 9900, which was also compatible with their TI-990 line of minicomputers. The 9900 was used in the TI 990/4 minicomputer, the TI-99/4A home computer, and the TM990 line of OEM microcomputer boards. The chip was packaged in a large ceramic 64-pin DIP package, while most 8-bit microprocessors such as the Intel 8080 used the more common, smaller, and less expensive plastic 40-pin DIP. A follow-on chip, the TMS 9980, was designed to compete with the Intel 8080, had the full TI 990 16-bit instruction set, used a plastic 40-pin package, moved data 8 bits at a time, but could only address 16 KB. A third chip, the TMS 9995, was a new design. The family later expanded to include the 99105 and 99110.
The Western Design Center, Inc. (WDC) introduced the CMOS 65816 16-bit upgrade of the WDC CMOS 65C02 in 1984. The 65816 16-bit microprocessor was the core of the Apple IIgs and later the Super Nintendo Entertainment System, making it one of the most popular 16-bit designs of all time.
Intel followed a different path, having no minicomputers to emulate, and instead “upsized” their 8080 design into the 16-bit Intel 8086, the first member of the x86 family, which powers most modern PC-type computers. Intel introduced the 8086 as a cost-effective way of porting software from the 8080 line, and succeeded in winning much business on that premise. The 8088, a version of the 8086 that used an external 8-bit data bus, was the microprocessor in the first IBM PC, the model 5150. Following up their 8086 and 8088, Intel released the 80186, 80286 and, in 1985, the 32-bit 80386, cementing their PC market dominance with the processor family’s backwards compatibility. The 8086 and 80186 had a crude method of segmentation, while the 80286 introduced a full-featured segmented memory management unit (MMU), and the 80386 introduced a flat 32-bit memory model with paged memory management.
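The “crude” segmentation mentioned above is the 8086’s real-mode scheme: a 16-bit segment value is shifted left four bits and added to a 16-bit offset, yielding a 20-bit physical address. Here is a minimal Python sketch of that standard arithmetic (the function name is mine, not something defined in this article):

    # 8086 real-mode addressing: physical = (segment << 4) + offset,
    # wrapped to the 20-bit (1 MB) address space.
    def real_mode_address(segment, offset):
        return ((segment << 4) + offset) & 0xFFFFF

    # Different segment:offset pairs can alias the same physical address:
    print(hex(real_mode_address(0xFFFF, 0x0000)))  # 0xffff0
    print(hex(real_mode_address(0xF000, 0xFFF0)))  # 0xffff0 as well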
32-bit designs
Upper interconnect layers on an Intel 80486DX2 die.
16-bit designs had only been on the market briefly when 32-bit implementations started to appear.
The most significant of the 32-bit designs is the MC68000, introduced in 1979. The 68K, as it was widely known, had 32-bit registers but used 16-bit internal data paths and a 16-bit external data bus to reduce pin count, and supported only 24-bit addresses. Motorola generally described it as a 16-bit processor, though it clearly had a 32-bit architecture. The combination of high performance, large (16 megabytes, or 2^24 bytes) memory space and fairly low cost made it the most popular CPU design of its class. The Apple Lisa and Macintosh designs made use of the 68000, as did a host of other designs in the mid-1980s, including the Atari ST and Commodore Amiga.
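As a quick arithmetic check of the 2^24 figure (included purely for illustration):

    # A 24-bit address selects one of 2**24 bytes: exactly 16 megabytes.
    address_space = 2 ** 24
    print(address_space)                   # 16777216
    print(address_space // (1024 * 1024))  # 16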
The world’s first single-chip fully 32-bit microprocessor, with 32-bit data paths, 32-bit buses, and 32-bit addresses, was the AT&T Bell Labs BELLMAC-32A, with first samples in 1980 and general production in 1982.[24][25] After the divestiture of AT&T in 1984, it was renamed the WE 32000 (WE for Western Electric), and had two follow-on generations, the WE 32100 and WE 32200. These microprocessors were used in the AT&T 3B5 and 3B15 minicomputers; in the 3B2, the world’s first desktop supermicrocomputer; in the “Companion”, the world’s first 32-bit laptop computer; and in “Alexander”, the world’s first book-sized supermicrocomputer, featuring ROM-pack memory cartridges similar to today’s gaming consoles. All these systems ran the UNIX System V operating system.
Intel’s first 32-bit microprocessor was the iAPX 432, which was introduced in 1981 but was not a commercial success. It had an advanced capability-based object-oriented architecture, but poor performance compared to contemporary architectures such as Intel’s own 80286 (introduced 1982), which was almost four times as fast on typical benchmark tests. However, the iAPX 432’s poor results were partly due to a rushed, and therefore suboptimal, Ada compiler.
The ARM first appeared in 1985. This is a RISC processor design, which has since come to dominate the 32-bit embedded systems processor space due in large part to its power efficiency, its licensing model, and its wide selection of system development tools. Semiconductor manufacturers generally license cores such as the ARM11 and integrate them into their own system on a chip products; only a few such vendors are licensed to modify the ARM cores. Most cell phones include an ARM processor, as do a wide variety of other products. There are microcontroller-oriented ARM cores without virtual memory support, as well as SMP applications processors with virtual memory.
Motorola’s success with the 68000 led to the MC68010, which added virtual memory support. The MC68020, introduced in 1985, added full 32-bit data and address buses. The 68020 became hugely popular in the Unix supermicrocomputer market, and many small companies (e.g., Altos, Charles River Data Systems) produced desktop-size systems. The MC68030 was introduced next, improving upon the previous design by integrating the MMU into the chip. The continued success led to the MC68040, which included an FPU for better math performance. A 68050 failed to achieve its performance goals and was not released, and the follow-up MC68060 was released into a market saturated by much faster RISC designs. The 68K family faded from the desktop in the early 1990s.
Other large companies designed the 68020 and follow-ons into embedded equipment. At one point, there were more 68020s in embedded equipment than there were Intel Pentiums in PCs.[26] The ColdFire processor cores are derivatives of the venerable 68020.
During this time (early to mid-1980s), National Semiconductor introduced a very similar 32-bit microprocessor with a 16-bit pinout called the NS 16032 (later renamed 32016); the full 32-bit version was named the NS 32032. Later came the NS 32132, which allowed two CPUs to reside on the same memory bus with built-in arbitration. The NS32016/32 outperformed the MC68000/10, but the NS32332, which arrived at approximately the same time as the MC68020, did not have enough performance. The third-generation chip, the NS32532, was different: it had about double the performance of the MC68030, which was released around the same time. The appearance of RISC processors like the AM29000 and MC88000 (both now discontinued) influenced the architecture of the final core, the NS32764. Technically advanced, using a superscalar RISC core, internally overclocked, with a 64-bit bus, it was still capable of executing Series 32000 instructions through real-time translation.
When National Semiconductor decided to leave the Unix market, the chip was redesigned into the Swordfish embedded processor with a set of on-chip peripherals. The chip turned out to be too expensive for the laser printer market and was killed. The design team went to Intel and there designed the Pentium processor, which is very similar to the NS32764 core internally. The big success of the Series 32000 was in the laser printer market, where the NS32CG16 with microcoded BitBlt instructions had very good price/performance and was adopted by large companies like Canon. By the mid-1980s, Sequent had introduced the first symmetric multiprocessor (SMP) server-class computer using the NS 32032. This was one of the design’s few wins, and it disappeared in the late 1980s.
The MIPS R2000 (1984) and R3000 (1989) were highly successful 32-bit RISC microprocessors. They were used in high-end workstations and servers by SGI, among others.
Other designs included the interesting Zilog Z80000, which arrived on the market too late to stand a chance and disappeared quickly.
In the late 1980s, “microprocessor wars” started killing off some of the microprocessors. With only one major design win, Sequent, the NS 32032 simply faded out of existence, and Sequent switched to Intel microprocessors.
From 1985 to 2003, the 32-bit x86 architectures became increasingly dominant in desktop, laptop, and server markets, and these microprocessors became faster and more capable. Intel had licensed early versions of the architecture to other companies, but declined to license the Pentium, so AMD and Cyrix built later versions of the architecture based on their own designs. During this span, these processors increased in complexity (transistor count) and capability (instructions/second) by at least three orders of magnitude. Intel’s Pentium line is probably the most famous and recognizable 32-bit processor model, at least with the public at large.
64-bit designs in personal computers
While 64-bit microprocessor designs have been in use in several markets since the early 1990s, the early 2000s saw the introduction of 64-bit microprocessors targeted at the PC market.
With AMD’s introduction of a 64-bit architecture backwards-compatible with x86, x86-64 (also called AMD64), in September 2003, followed by Intel’s near fully compatible 64-bit extensions (first called IA-32e or EM64T, later renamed Intel 64), the 64-bit desktop era began. Both versions can run 32-bit legacy applications without any performance penalty as well as new 64-bit software. With operating systems that run natively in 64-bit mode, such as Windows XP x64, Windows Vista x64, Windows 7 x64, Linux, BSD and Mac OS X, the software is also geared to fully utilize the capabilities of such processors. The move to 64 bits is more than just an increase in register size from the IA-32, as it also doubles the number of general-purpose registers.
The move to 64 bits by PowerPC processors had been intended since the processors’ design in the early 1990s and was not a major cause of incompatibility. Existing integer registers were extended, as were all related data pathways, but, as was the case with IA-32, both floating point and vector units had been operating at or above 64 bits for several years. Unlike what happened when IA-32 was extended to x86-64, no new general-purpose registers were added in 64-bit PowerPC, so any performance gained when using the 64-bit mode for applications making no use of the larger address space is minimal.
Multicore designs
Front of Pentium D dual core processor
Back of Pentium D dual core processor
Main article: Multi-core (computing)
A different approach to improving a computer’s performance is to add extra processors, as in symmetric multiprocessing designs, which have been popular in servers and workstations since the early 1990s. Keeping up with Moore’s Law is becoming increasingly challenging as chip-making technologies approach their physical limits.
In response, microprocessor manufacturers look for other ways to improve performance in order to maintain the momentum of constant upgrades in the market.
A multi-core processor is simply a single chip containing more than one microprocessor core, effectively multiplying the potential performance by the number of cores (as long as the operating system and software are designed to take advantage of more than one processor). Some components, such as the bus interface and second-level cache, may be shared between cores. Because the cores are physically very close, they can interface at much faster clock rates than discrete multiprocessor systems, improving overall system performance.
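The caveat about operating-system and software support is commonly quantified with Amdahl’s Law, which bounds the speedup when only part of a workload can run in parallel. A minimal sketch, assuming the standard textbook formula (the article itself does not derive it):

    # Amdahl's Law: if a fraction p of the work parallelizes across n cores,
    # the overall speedup is 1 / ((1 - p) + p / n).
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # Even with 90% of the work parallel, four cores give ~3.1x, not 4x.
    print(round(amdahl_speedup(0.90, 4), 2))  # 3.08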
In 2005, the first personal computer dual-core processors were announced, and as of 2009 dual-core and quad-core processors are widely used in servers, workstations and PCs, while six- and eight-core processors will be available for high-end applications in both home and professional environments.
Sun Microsystems has released the Niagara and Niagara 2 chips, both of which feature an eight-core design. The Niagara 2 supports more threads and operates at 1.6 GHz.
High-end Intel Xeon processors on the LGA771 socket are DP (dual processor) capable, as is the Intel Core 2 Extreme QX9775, also used in Apple’s Mac Pro and the Intel Skulltrail motherboard. With the transition to the LGA1366 and LGA1156 sockets and the Intel Core i7 and i5 chips, quad-core is now considered mainstream, and with the release of the i7-980X, six-core processors are now well within reach.
RISC
In the mid-1980s to early-1990s, a crop of new high-performance Reduced Instruction Set Computer (RISC) microprocessors appeared, influenced by discrete RISC-like CPU designs such as the IBM 801 and others. RISC microprocessors were initially used in special-purpose machines and Unix workstations, but then gained wide acceptance in other roles.
In 1986, HP released its first system with a PA-RISC CPU. The first commercial RISC microprocessor design was released either by MIPS Computer Systems (the 32-bit R2000; the R1000 was not released) or by Acorn Computers (the 32-bit ARM2, in 1987). The R3000 made the design truly practical, and the R4000 introduced the world’s first commercially available 64-bit RISC microprocessor. Competing projects would result in the IBM POWER and Sun SPARC architectures. Soon every major vendor was releasing a RISC design, including the AT&T CRISP, AMD 29000, Intel i860 and Intel i960, Motorola 88000, and DEC Alpha.
As of 2007, two 64-bit RISC architectures are still produced in volume for non-embedded applications: SPARC and Power ISA.
Special-purpose designs
Though the term “microprocessor” has traditionally referred to a single- or multi-chip CPU or system-on-a-chip (SoC), several types of specialized processing devices have followed from the technology. The most common examples are microcontrollers, digital signal processors (DSP) and graphics processing units (GPU). Many examples of these are either not programmable, or have limited programming facilities. For example, in general GPUs through the 1990s were mostly non-programmable and have only recently gained limited facilities like programmable vertex shaders. There is no universal consensus on what defines a “microprocessor”, but it is usually safe to assume that the term refers to a general-purpose CPU of some sort and not a special-purpose processor unless specifically noted.
Market statistics
In 2003, about $44 billion (USD) worth of microprocessors were manufactured and sold.[27] Although about half of that money was spent on CPUs used in desktop or laptop personal computers, those account for only about 0.2% of all CPUs sold.
About 55% of all CPUs sold in the world are 8-bit microcontrollers, over two billion of which were sold in 1997.[28]
As of 2002, less than 10% of all the CPUs sold in the world are 32-bit or more. Of all the 32-bit CPUs sold, about 2% are used in desktop or laptop personal computers. Most microprocessors are used in embedded control applications such as household appliances, automobiles, and computer peripherals. Taken as a whole, the average price for a microprocessor, microcontroller, or DSP is just over $6.[29]
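Combining these figures gives a rough sanity check; note that the sales total (2003) and the average price (2002) come from adjacent years, so treat this strictly as an order-of-magnitude estimate:

    # ~$44 billion in annual sales at an average unit price just over $6
    # implies on the order of seven billion parts per year.
    total_sales_usd = 44e9
    average_price_usd = 6.0
    print(f"{total_sales_usd / average_price_usd:.2e}")  # 7.33e+09

To order of magnitude, that squares with the production figure for 2008 quoted below.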
About ten billion CPUs were manufactured in 2008. About 98% of new CPUs produced each year are embedded.[30]
Microprocessor
Intel 4004, the first general-purpose, commercial microprocessor
A microprocessor incorporates most or all of the functions of a computer’s central processing unit (CPU) on a single integrated circuit (IC, or microchip).[1] The first microprocessors emerged in the early 1970s and were used for electronic calculators, using binary-coded decimal (BCD) arithmetic on 4-bit words. Other embedded uses of 4-bit and 8-bit microprocessors, such as terminals, printers, various kinds of automation etc., followed soon after. Affordable 8-bit microprocessors with 16-bit addressing also led to the first general-purpose microcomputers from the mid-1970s on.
During the 1960s, computer processors were often constructed out of small and medium-scale ICs containing from tens to a few hundred transistors. The integration of a whole CPU onto a single chip greatly reduced the cost of processing power. From these humble beginnings, continued increases in microprocessor capacity have rendered other forms of computers almost completely obsolete (see history of computing hardware), with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers.
Since the early 1970s, the increase in capacity of microprocessors has been a consequence of Moore’s Law, which suggests that the number of transistors that can be fitted onto a chip doubles every two years. Although originally calculated as a doubling every year,[2] Moore later refined the period to two years.[3] It is often incorrectly quoted as a doubling of transistors every 18 months.
In the late 1990s, and in the high-performance microprocessor segment, heat generation (TDP), due to switching losses, static current leakage, and other factors, emerged as a leading developmental constraint.[4]Contents[hide]
* 1 Firsts          o 1.1 Intel 4004          o 1.2 TMS 1000          o 1.3 Pico/General Instrument          o 1.4 CADC          o 1.5 Gilbert Hyatt          o 1.6 Four-Phase Systems AL1    * 2 8-bit designs    * 3 12-bit designs    * 4 16-bit designs    * 5 32-bit designs    * 6 64-bit designs in personal computers    * 7 Multicore designs    * 8 RISC    * 9 Special-purpose designs    * 10 Market statistics    * 11 See also    * 12 Notes and references    * 13 External links
[edit] Firsts
Three projects delivered a microprocessor at about the same time: Intel’s 4004, Texas Instruments (TI) TMS 1000, and Garrett AiResearch’s Central Air Data Computer (CADC).[edit] Intel 4004The 4004 with cover removed (left) and as actually used (right).
The Intel 4004 is generally regarded as the first microprocessor,[5][6] and cost thousands of dollars.[7] The first known advertisement for the 4004 is dated November 1971 and appeared in Electronic News.[8] The project that produced the 4004 originated in 1969, when Busicom, a Japanese calculator manufacturer, asked Intel to build a chipset for high-performance desktop calculators. Busicom’s original design called for a programmable chip set consisting of seven different chips. Three of the chips were to make a special-purpose CPU with its program stored in ROM and its data stored in shift register read-write memory. Ted Hoff, the Intel engineer assigned to evaluate the project, believed the Busicom design could be simplified by using dynamic RAM storage for data, rather than shift register memory, and a more traditional general-purpose CPU architecture. Hoff came up with a four–chip architectural proposal: a ROM chip for storing the programs, a dynamic RAM chip for storing data, a simple I/O device and a 4-bit central processing unit (CPU). Although not a chip designer, he felt the CPU could be integrated into a single chip. This chip would later be called the 4004 microprocessor.
The architecture and specifications of the 4004 came from the interaction of Hoff with Stanley Mazor, a software engineer reporting to him, and with Busicom engineer Masatoshi Shima, during 1969. In April 1970, Intel hired Federico Faggin to lead the design of the four-chip set. Faggin, who originally developed the silicon gate technology (SGT) in 1968 at Fairchild Semiconductor[9] and designed the world’s first commercial integrated circuit using SGT, the Fairchild 3708, had the correct background to lead the project since it was SGT that made it possible to implement a single-chip CPU with the proper speed, power dissipation and cost. Faggin also developed the new methodology for random logic design, based on silicon gate, that made the 4004 possible. Production units of the 4004 were first delivered to Busicom in March 1971 and shipped to other customers in late 1971.[edit] TMS 1000
The Smithsonian Institution says TI engineers Gary Boone and Michael Cochran succeeded in creating the first microcontroller (also called a microcomputer) in 1971. The result of their work was the TMS 1000, which went commercial in 1974.[10]
TI developed the 4-bit TMS 1000 and stressed pre-programmed embedded applications, introducing a version called the TMS1802NC on September 17, 1971 which implemented a calculator on a chip.
TI filed for the patent on the microprocessor. Gary Boone was awarded U.S. Patent 3,757,306 for the single-chip microprocessor architecture on September 4, 1973. It may never be known which company actually had the first working microprocessor running on the lab bench. In both 1971 and 1976, Intel and TI entered into broad patent cross-licensing agreements, with Intel paying royalties to TI for the microprocessor patent. A nice history of these events is contained in court documentation from a legal dispute between Cyrix and Intel, with TI as intervenor and owner of the microprocessor patent.
A computer-on-a-chip is a variation of a microprocessor that combines the microprocessor core (CPU), some program memory and read/write memory, and I/O (input/output) lines onto one chip. The computer-on-a-chip patent, called the “microcomputer patent” at the time, U.S. Patent 4,074,351, was awarded to Gary Boone and Michael J. Cochran of TI. Aside from this patent, the standard meaning of microcomputer is a computer using one or more microprocessors as its CPU(s), while the concept defined in the patent is more akin to a microcontroller.[edit] Pico/General InstrumentThe PICO1/GI250 chip introduced in 1971. This was designed by Pico Electronics (Glenrothes, Scotland) and manufactured by General Instrument of Hicksville NY
In 1971 Pico Electronics[11] and General Instrument (GI) introduced their first collaboration in ICs, a complete single chip calculator IC for the Monroe/Litton Royal Digital III calculator. This chip could also arguably lay claim to be one of the first microprocessors or microcontrollers having ROM, RAM and a RISC instruction set on-chip. The layout for the four layers of the PMOS process was hand drawn at x500 scale on mylar film, a significant task at the time given the complexity of the chip.
Pico was a spinout by five GI design engineers whose vision was to create single chip calculator ICs. They had significant previous design experience on multiple calculator chipsets with both GI and Marconi-Elliott.[12] The key team members had originally been tasked by Elliott Automation to create an 8 bit computer in MOS and had helped establish a MOS Research Laboratory in Glenrothes, Scotland in 1967.
Calculators were becoming the largest single market for semiconductors and Pico and GI went on to have significant success in this burgeoning market. GI continued to innovate in microprocessors and microcontrollers with products including the PIC1600, PIC1640 and PIC1650. In 1987 the GI Microelectronics business was spun out into the very successful PIC microcontroller business.[edit] CADCQuestion book-new.svg This section needs references that appear in reliable third-party publications. Primary sources or sources affiliated with the subject are generally not sufficient for a Wikipedia article. Please add more appropriate citations from reliable sources. (March 2010)
In 1968, Garrett AiResearch (which employed designers Ray Holt and Steve Geller) was invited to produce a digital computer to compete with electromechanical systems then under development for the main flight control computer in the US Navy’s new F-14 Tomcat fighter. The design was complete by 1970, and used a MOS-based chipset as the core CPU. The design was significantly (approximately 20 times) smaller and much more reliable than the mechanical systems it competed against, and was used in all of the early Tomcat models. This system contained “a 20-bit, pipelined, parallel multi-microprocessor”. The Navy refused to allow publication of the design until 1997. For this reason the CADC, and the MP944 chipset it used, are fairly unknown.[13] Ray Holt graduated California Polytechnic University in 1968, and began his computer design career with the CADC. From its inception, it was shrouded in secrecy until 1998 when at Holt’s request, the US Navy allowed the documents into the public domain. Since then several[who?] have debated if this was the first microprocessor. Holt has stated that no one has compared this microprocessor with those that came later.[14] According to Parab et al. (2007), “The scientific papers and literature published around 1971 reveal that the MP944 digital processor used for the F-14 Tomcat aircraft of the US Navy qualifies as the first microprocessor. Although interesting, it was not a single-chip processor, and was not general purpose – it was more like a set of parallel building blocks you could use to make a special-purpose DSP form. It indicates that today’s industry theme of converging DSP-microcontroller architectures was started in 1971.”[15] This convergence of DSP and microcontroller architectures is known as a Digital Signal Controller.[citation needed][edit] Gilbert Hyatt
Gilbert Hyatt was awarded a patent claiming an invention pre-dating both TI and Intel, describing a “microcontroller”.[16] The patent was later invalidated, but not before substantial royalties were paid out.[17][18][edit] Four-Phase Systems AL1
The Four-Phase Systems AL1 was an 8-bit bit slice chip containing eight registers and an ALU.[19] It was designed by Lee Boysel in 1969.[20][21][22] At the time, it formed part of a nine-chip, 24-bit CPU with three AL1s, but it was later called a microprocessor when, in response to 1990s litigation by Texas Instruments, a demonstration system was constructed where a single AL1 formed part of a courtroom demonstration computer system, together with RAM, ROM, and an input-output device.[23][edit] 8-bit designs
The Intel 4004 was followed in 1972 by the Intel 8008, the world’s first 8-bit microprocessor. According to A History of Modern Computing, (MIT Press), pp. 220–21, Intel entered into a contract with Computer Terminals Corporation, later called Datapoint, of San Antonio TX, for a chip for a terminal they were designing. Datapoint later decided not to use the chip, and Intel marketed it as the 8008 in April, 1972. This was the world’s first 8-bit microprocessor. It was the basis for the famous “Mark-8″ computer kit advertised in the magazine Radio-Electronics in 1974.
The 8008 was the precursor to the very successful Intel 8080 (1974), Zilog Z80 (1976), and derivative Intel 8-bit processors. The competing Motorola 6800 was released August 1974 and the similar MOS Technology 6502 in 1975 (designed largely by the same people). The 6502 rivaled the Z80 in popularity during the 1980s.
A low overall cost, small packaging, simple computer bus requirements, and sometimes the integration of extra circuitry (e.g. the Z80’s built-in memory refresh circuitry) allowed the home computer “revolution” to accelerate sharply in the early 1980s. This delivered such inexpensive machines as the Sinclair ZX-81, which sold for US$99.
The Western Design Center, Inc. (WDC) introduced the CMOS 65C02 in 1982 and licensed the design to several firms. It was used as the CPU in the Apple IIe and IIc personal computers as well as in medical implantable grade pacemakers and defibrilators, automotive, industrial and consumer devices. WDC pioneered the licensing of microprocessor designs, later followed by ARM and other microprocessor Intellectual Property (IP) providers in the 1990s.
Motorola introduced the MC6809 in 1978, an ambitious and thought-through 8-bit design source compatible with the 6800 and implemented using purely hard-wired logic. (Subsequent 16-bit microprocessors typically used microcode to some extent, as CISC design requirements were getting too complex for purely hard-wired logic only.)
Another early 8-bit microprocessor was the Signetics 2650, which enjoyed a brief surge of interest due to its innovative and powerful instruction set architecture.
A seminal microprocessor in the world of spaceflight was RCA’s RCA 1802 (aka CDP1802, RCA COSMAC) (introduced in 1976), which was used onboard the Galileo probe to Jupiter (launched 1989, arrived 1995). RCA COSMAC was the first to implement CMOS technology. The CDP1802 was used because it could be run at very low power, and because a variant was available fabricated using a special production process (Silicon on Sapphire), providing much better protection against cosmic radiation and electrostatic discharges than that of any other processor of the era. Thus, the SOS version of the 1802 was said to be the first radiation-hardened microprocessor.
The RCA 1802 had what is called a static design, meaning that the clock frequency could be made arbitrarily low, even to 0 Hz, a total stop condition. This let the Galileo spacecraft use minimum electric power for long uneventful stretches of a voyage. Timers and/or sensors would awaken/improve the performance of the processor in time for important tasks, such as navigation updates, attitude control, data acquisition, and radio communication.[edit] 12-bit designs
The Intersil 6100 family consisted of a 12-bit microprocessor (the 6100) and a range of peripheral support and memory ICs. The microprocessor recognised the DEC PDP-8 minicomputer instruction set. As such it was sometimes referred to as the CMOS-PDP8. Since it was also produced by Harris Corporation, it was also known as the Harris HM-6100. By virtue of its CMOS technology and associated benefits, the 6100 was being incorporated into some military designs until the early 1980s.[edit] 16-bit designs
The first multi-chip 16-bit microprocessor was the National Semiconductor IMP-16, introduced in early 1973. An 8-bit version of the chipset was introduced in 1974 as the IMP-8.
Other early multi-chip 16-bit microprocessors include one used by Digital Equipment Corporation (DEC) in the LSI-11 OEM board set and the packaged PDP 11/03 minicomputer, and the Fairchild Semiconductor MicroFlame 9440, both of which were introduced in the 1975 to 1976 timeframe.
In 1975, National introduced the first 16-bit single-chip microprocessor, the National Semiconductor PACE, which was later followed by an NMOS version, the INS8900.
Another early single-chip 16-bit microprocessor was TI’s TMS 9900, which was also compatible with their TI-990 line of minicomputers. The 9900 was used in the TI 990/4 minicomputer, the TI-99/4A home computer, and the TM990 line of OEM microcomputer boards. The chip was packaged in a large ceramic 64-pin DIP package, while most 8-bit microprocessors such as the Intel 8080 used the more common, smaller, and less expensive plastic 40-pin DIP. A follow-on chip, the TMS 9980, was designed to compete with the Intel 8080, had the full TI 990 16-bit instruction set, used a plastic 40-pin package, moved data 8 bits at a time, but could only address 16 KB. A third chip, the TMS 9995, was a new design. The family later expanded to include the 99105 and 99110.
The Western Design Center, Inc. (WDC) introduced the CMOS 65816 16-bit upgrade of the WDC CMOS 65C02 in 1984. The 65816 16-bit microprocessor was the core of the Apple IIgs and later the Super Nintendo Entertainment System, making it one of the most popular 16-bit designs of all time.
Intel followed a different path, having no minicomputers to emulate, and instead “upsized” their 8080 design into the 16-bit Intel 8086, the first member of the x86 family, which powers most modern PC type computers. Intel introduced the 8086 as a cost effective way of porting software from the 8080 lines, and succeeded in winning much business on that premise. The 8088, a version of the 8086 that used an external 8-bit data bus, was the microprocessor in the first IBM PC, the model 5150. Following up their 8086 and 8088, Intel released the 80186, 80286 and, in 1985, the 32-bit 80386, cementing their PC market dominance with the processor family’s backwards compatibility. The 8086 and 80186 had a crude method of segmentation, while the 80286 introduced a full-featured semgented memory management unit (MMU), and the 80386 introduced a flat 32-bit memory model with paged memory management.[edit] 32-bit designsUpper interconnect layers on an Intel 80486DX2 die.
16-bit designs had only been on the market briefly when 32-bit implementations started to appear.
The most significant of the 32-bit designs is the MC68000, introduced in 1979. The 68K, as it was widely known, had 32-bit registers but used 16-bit internal data paths and a 16-bit external data bus to reduce pin count, and supported only 24-bit addresses. Motorola generally described it as a 16-bit processor, though it clearly has 32-bit architecture. The combination of high performance, large (16 megabytes or 224 bytes) memory space and fairly low cost made it the most popular CPU design of its class. The Apple Lisa and Macintosh designs made use of the 68000, as did a host of other designs in the mid-1980s, including the Atari ST and Commodore Amiga.
The world’s first single-chip fully-32-bit microprocessor, with 32-bit data paths, 32-bit buses, and 32-bit addresses, was the AT&T Bell Labs BELLMAC-32A, with first samples in 1980, and general production in 1982[24][25] After the divestiture of AT&T in 1984, it was renamed the WE 32000 (WE for Western Electric), and had two follow-on generations, the WE 32100 and WE 32200. These microprocessors were used in the AT&T 3B5 and 3B15 minicomputers; in the 3B2, the world’s first desktop supermicrocomputer; in the “Companion”, the world’s first 32-bit laptop computer; and in “Alexander”, the world’s first book-sized supermicrocomputer, featuring ROM-pack memory cartridges similar to today’s gaming consoles. All these systems ran the UNIX System V operating system.
Intel’s first 32-bit microprocessor was the iAPX 432, which was introduced in 1981 but was not a commercial success. It had an advanced capability-based object-oriented architecture, but poor performance compared to contemporary architectures such as Intel’s own 80286 (introduced 1982), which was almost four times as fast on typical benchmark tests. However, the results for the iAPX432 was partly due to a rushed and therefore suboptimal Ada compiler.
The ARM first appeared in 1985. This is a RISC processor design, which has since come to dominate the 32-bit embedded systems processor space due in large part to its power efficiency, its licensing model, and its wide selection of system development tools. Semiconductor manufacturers generally license cores such as the ARM11 and integrate them into their own system on a chip products; only a few such vendors are licensed to modify the ARM cores. Most cell phones include an ARM processor, as do a wide variety of other products. There are microcontroller-oriented ARM cores without virtual memory support, as well as SMP applications processors with virtual memory.
Motorola’s success with the 68000 led to the MC68010, which added virtual memory support. The MC68020, introduced in 1985 added full 32-bit data and address busses. The 68020 became hugely popular in the Unix supermicrocomputer market, and many small companies (e.g., Altos, Charles River Data Systems) produced desktop-size systems. The MC68030 was introduced next, improving upon the previous design by integrating the MMU into the chip. The continued success led to the MC68040, which included an FPU for better math performance. A 68050 failed to achieve its performance goals and was not released, and the follow-up MC68060 was released into a market saturated by much faster RISC designs. The 68K family faded from the desktop in the early 1990s.
Other large companies designed the 68020 and follow-ons into embedded equipment. At one point, there were more 68020s in embedded equipment than there were Intel Pentiums in PCs.[26] The ColdFire processor cores are derivatives of the venerable 68020.
During this time (early to mid-1980s), National Semiconductor introduced a very similar 16-bit pinout, 32-bit internal microprocessor called the NS 16032 (later renamed 32016), the full 32-bit version named the NS 32032. Later the NS 32132 was introduced which allowed two CPUs to reside on the same memory bus, with built in arbitration. The NS32016/32 outperformed the MC68000/10 but the NS32332 which arrived at approximately the same time the MC68020 did not have enough performance. The third generation chip, the NS32532 was different. It had about double the performance of the MC68030 which was released around the same time. The appearance of RISC processors like the AM29000 and MC88000 (now both dead) influenced the architecture of the final core, the NS32764. Technically advanced, using a superscalar RISC core, internally overclocked, with a 64 bit bus, it was still capable of executing Series 32000 instructions through real time translation. When National Semiconductor decided to leave the Unix market, the chip was redesigned into the Swordfish Embedded processor with a set of on chip peripherals. The chip turned out to be too expensive for the laser printer market and was killed. The design team went to Intel and there designed the Pentium processor which is very similar to the NS32764 core internally The big success of the Series 32000 was in the laser printer market, where the NS32CG16 with microcoded BitBlt instructions had very good price/performance and was adopted by large companies like Canon. By the mid-1980s, Sequent introduced the first symmetric multiprocessor (SMP) server-class computer using the NS 32032. This was one of the design’s few wins, and it disappeared in the late 1980s.
The MIPS R2000 (1984) and R3000 (1989) were highly successful 32-bit RISC microprocessors. They were used in high-end workstations and servers by SGI, among others.
Other designs included the interesting Zilog Z80000, which arrived too late to market to stand a chance and disappeared quickly.
In the late 1980s, “microprocessor wars” started killing off some of the microprocessors. Apparently, with only one major design win, Sequent, the NS 32032 just faded out of existence, and Sequent switched to Intel microprocessors.
From 1985 to 2003, the 32-bit x86 architectures became increasingly dominant in desktop, laptop, and server markets, and these microprocessors became faster and more capable. Intel had licensed early versions of the architecture to other companies, but declined to license the Pentium, so AMD and Cyrix built later versions of the architecture based on their own designs. During this span, these processors increased in complexity (transistor count) and capability (instructions/second) by at least three orders of magnitude. Intel’s Pentium line is probably the most famous and recognizable 32-bit processor model, at least with the public at large.[edit] 64-bit designs in personal computers
While 64-bit microprocessor designs have been in use in several markets since the early 1990s, the early 2000s saw the introduction of 64-bit microprocessors targeted at the PC market.
With AMD’s introduction of a 64-bit architecture backwards-compatible with x86, x86-64 (also called AMD64), in September 2003, followed by Intel’s near fully compatible 64-bit extensions (first called IA-32e or EM64T, later renamed Intel 64), the 64-bit desktop era began. Both versions can run 32-bit legacy applications without any performance penalty as well as new 64-bit software. With operating systems Windows XP x64, Windows Vista x64, Windows 7 x64, Linux, BSD and Mac OS X that run 64-bit native, the software is also geared to fully utilize the capabilities of such processors. The move to 64 bits is more than just an increase in register size from the IA-32 as it also doubles the number of general-purpose registers.
The move to 64 bits by PowerPC processors had been intended since the processors’ design in the early 90s and was not a major cause of incompatibility. Existing integer registers are extended as are all related data pathways, but, as was the case with IA-32, both floating point and vector units had been operating at or above 64 bits for several years. Unlike what happened when IA-32 was extended to x86-64, no new general purpose registers were added in 64-bit PowerPC, so any performance gained when using the 64-bit mode for applications making no use of the larger address space is minimal.[edit] Multicore designsFront of Pentium D dual core processorBack of Pentium D dual core processorMain article: Multi-core (computing)
A different approach to improving a computer’s performance is to add extra processors, as in symmetric multiprocessing designs, which have been popular in servers and workstations since the early 1990s. Keeping up with Moore’s Law is becoming increasingly challenging as chip-making technologies approach the physical limits of the technology.
In response, the microprocessor manufacturers look for other ways to improve performance, in order to hold on to the momentum of constant upgrades in the market.
A multi-core processor is simply a single chip containing more than one microprocessor core, effectively multiplying the potential performance with the number of cores (as long as the operating system and software is designed to take advantage of more than one processor). Some components, such as bus interface and second level cache, may be shared between cores. Because the cores are physically very close they interface at much faster clock rates compared to discrete multiprocessor systems, improving overall system performance.
In 2005, the first dual-core processors for personal computers were announced, and as of 2009 dual-core and quad-core processors are widely used in servers, workstations, and PCs, while six- and eight-core processors are becoming available for high-end applications in both home and professional environments.
Sun Microsystems has released the Niagara and Niagara 2 chips, both of which feature an eight-core design. The Niagara 2 supports more threads and operates at 1.6 GHz.
High-end Intel Xeon processors on the LGA771 socket are DP (dual processor) capable, as is the Intel Core 2 Extreme QX9775, used in Apple’s Mac Pro and on the Intel Skulltrail motherboard. With the transition to the LGA1366 and LGA1156 sockets and the Intel Core i7 and i5 chips, quad-core is now considered mainstream, and with the release of the i7-980X, six-core processors are now well within reach.
RISC
In the mid-1980s to early-1990s, a crop of new high-performance Reduced Instruction Set Computer (RISC) microprocessors appeared, influenced by discrete RISC-like CPU designs such as the IBM 801 and others. RISC microprocessors were initially used in special-purpose machines and Unix workstations, but then gained wide acceptance in other roles.
In 1986, HP released its first system with a PA-RISC CPU. The first commercial RISC microprocessor design was released either by MIPS Computer Systems (the 32-bit R2000; the R1000 was not released) or by Acorn Computers (the 32-bit ARM2, in 1987).[citation needed] The R3000 made the design truly practical, and the R4000 introduced the world’s first commercially available 64-bit RISC microprocessor. Competing projects would result in the IBM POWER and Sun SPARC architectures. Soon every major vendor was releasing a RISC design, including the AT&T CRISP, AMD 29000, Intel i860 and i960, Motorola 88000, and DEC Alpha.
As of 2007, two 64-bit RISC architectures are still produced in volume for non-embedded applications: SPARC and Power ISA.
Special-purpose designs
Though the term “microprocessor” has traditionally referred to a single- or multi-chip CPU or system-on-a-chip (SoC), several types of specialized processing devices have followed from the technology. The most common examples are microcontrollers, digital signal processors (DSP), and graphics processing units (GPU). Many examples of these are either not programmable or have limited programming facilities. For example, GPUs through the 1990s were mostly non-programmable and have only recently gained limited facilities like programmable vertex shaders. There is no universal consensus on what defines a “microprocessor”, but it is usually safe to assume that the term refers to a general-purpose CPU of some sort and not a special-purpose processor unless specifically noted.
Market statistics
In 2003, about $44 billion (USD) worth of microprocessors were manufactured and sold.[27] Although about half of that money was spent on CPUs used in desktop or laptop personal computers, those account for only about 0.2% of all CPUs sold.[citation needed]
About 55% of all CPUs sold in the world are 8-bit microcontrollers, over two billion of which were sold in 1997.[28]
As of 2002, less than 10% of all the CPUs sold in the world are 32-bit or more. Of all the 32-bit CPUs sold, about 2% are used in desktop or laptop personal computers. Most microprocessors are used in embedded control applications such as household appliances, automobiles, and computer peripherals. Taken as a whole, the average price for a microprocessor, microcontroller, or DSP is just over $6.[29]
About ten billion CPUs were manufactured in 2008. About 98% of new CPUs produced each year are embedded.[30]
TRENDS IN PRINTERS

Epson Artisan 810 All-In-One Inkjet Printer, $229.99

The Epson Artisan 810 is a jack of all trades, but it doesn’t sacrifice quality for features the way many multi-function printers do. A built-in 7.8-inch touch panel with a 3.5-inch LCD display sets the Artisan 810 apart from the average all-in-one. It makes an excellent photo printer and supports both WiFi and Fast Ethernet (10/100 Mbps). Other notable features of the Artisan 810 include an integrated media card reader, a 48-bit color scanner, and CD-printing capability.
Check out our Epson Artisan 800 Review.

Lexmark Platinum Pro 905 4-in-1 Inkjet Printer, $299.99

The Platinum Pro 905 offers the lowest black ink cost, at around only 1 cent per page. Efficiency is also built in, with a 300-page input capacity and a 50-page automatic document feeder. A 4.3-inch LCD touchscreen makes using the printer a breeze, and it ships ready for Ethernet and 802.11n wireless networking, so it can be set up anywhere in range of your wireless router. To top everything off, Lexmark offers one of the industry’s best warranties: a 5-year warranty plus lifetime priority phone support.
Check out CNET’s Lexmark Platinum Pro 905 Review.

HP Officejet Pro 8500 Wireless All-In-One Inkjet Printer, $279.99

HP’s Officejet Pro 8500 lets users print in color for up to 50% less per page while using less energy than laser printers. Its built-in wireless networking also allows setup from any room of the home. A 3.4-inch touchscreen LCD lets users manage all the printer’s functions with the touch of a finger, and the 8500’s scanner handles paper up to 8.5 x 14 inches.
Check out PC World’s HP Officejet Pro 8500 Review.

Dell 3130cn Color Laser Printer, $449.00

Small offices looking for an affordable starter printer should look no further than the Dell 3130cn color laser printer. The 3130cn produces professional-quality prints at up to 600 x 600 dpi and can even continue printing in black when the color cartridge runs out. The unit is fast, too, with 26 ppm in color and 31 ppm in black, and durable, with a 70,000-page-per-month duty cycle. The 3130cn’s versatility shines through with 802.11b/g networking, two-sided printing, and an additional 550-sheet paper drawer that brings its maximum input capacity to 950 sheets.
Check out PC World’s Dell 3130cn Review.

HP Photosmart D7560 Inkjet Printer, $99.99

If you’re searching for an affordable photo printer, the Photosmart D7560 is worth a look. The D7560 features ultra-high-resolution full-color 9600 x 2400 dpi printing and even prints on optical discs. Even with its focus on photo printing, the D7560 still does an adequate job with text and graphics, with blazing-fast speeds of up to 33 ppm black and 31 ppm color. Also featured is a 3.5-inch touchscreen for easy navigation as well as photo viewing and editing.
TRENDS IN MEMORY & STORAGE
Memory Trends 2010
October and November are perhaps the best time to look at memory trends for the coming year. At this point, the Christmas electronics production rush is probably over, and new consumer electronics have already been introduced for the Christmas season. However, we must caution that the 2009 year-end sales season will be different, because the Lunar New Year falls on February 14, 2010 of the Western calendar.
Together with the economic progression in China and the recovery of the Pacific Rim economy, the Lunar New Year pre-holiday market cannot be ignored. I am therefore predicting that the year-end electronics market will carry into January and even early February of 2010. As demand for electronics increases, so will the need for memory, a key component.
DDR2 to DDR3 changeover
History has taught us that technology alone does not move people to a new generation of memory; it is price parity that prompts consumers to change. DDR3 memory spent its first two years on the market struggling to gain traction: manufacturing overcapacity kept DDR2 prices down, so consumers had no incentive to switch to DDR3. The memory price depression persisted until the summer of 2009, when inventory finally tightened and DDR2 prices went from $0.80 to over $2.65 on 1 Gb devices (see the DXI index). DDR2 prices finally surpassed DDR3 prices in October 2009.
I expect this price trend to continue until after the Lunar New Year, when demand subsides.
Windows 7 creates demand for PC memory
Windows 7, the new Microsoft operating system, was introduced on October 22, 2009. Early reports indicate market acceptance double that of Windows Vista: consumers like its ease of use and find it less intimidating. Considering that the majority of corporate USA has not upgraded its OS in five years, the PC shipment rate will probably increase. With price parity between DDR2 and DDR3, DDR3 will be adopted very quickly; I can see 30% DDR3 adoption by summer 2010 and a 50% adoption rate by year-end 2010. This would help the DRAM industry rebuild.
Notebook Memory to dominate
As for home PC users, 2010 will be the year they convert from desktop to laptop. Laptops are already outselling desktops in the home computing market. Together with the attractive prices of netbooks and media computers, portability will be the selling point. With netbook prices under $200, I am also seeing computers penetrate the grass roots of emerging countries like China and India. We should see PC adoption rates grow exponentially in many parts of the world. The SODIMM memory module will be the favorite.
Mobile Memory
Smart phones will create a new demand for memory. The Apple iPhone and RIM BlackBerry set the stage and the standard for smart phones; Android phones will open the floodgates. 2010 will be the year that smart phones take hold and become a household necessity. New smart phones use low-power LPDDR DRAM for operational memory and NAND flash for storage memory. Memory consumption will be big and getting bigger. Memory will come in the form of the MCP (multi-chip package), meaning NAND flash physically stacked over LPDDR DRAM in a complex package. Standards will emerge for these MCPs to drive costs down.
Small geometry and multi-level NAND
NAND flash geometry is at 32 nm and heading to 22 nm. Bits per cell are going from 2 to 3, and moving toward 4. That means some general-purpose USB stick and SD card prices will come down, generating mass adoption beyond what we have today. Due to the lower reliability of 3-bit and 4-bit cells, they will be promoted as the Kodak film equivalent: limited to low rewrite-cycle or one-time-use applications.
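The capacity arithmetic behind multi-level cells is straightforward: storing b bits in one cell requires distinguishing 2^b voltage levels, so each extra bit doubles density but shrinks the margin between levels, which is exactly the reliability trade-off described above. A tiny Python sketch (my own illustration, not from the article):

# Storing b bits per cell requires 2**b distinct voltage levels, so each
# extra bit doubles density but halves the margin between adjacent levels
# (hurting reliability and rewrite endurance, as noted above).
for bits in (1, 2, 3, 4):
    levels = 2 ** bits
    print(f"{bits} bit(s)/cell -> {levels} voltage levels, "
          f"{bits}x the capacity of single-level flash")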
During 2010, SSDs (solid state drives) will progress but will not take quantum leaps, due to their inherent write reliability issue. Although advances in controller technology have overcome part of the problem, SSDs are still expensive and short of perfection. Enterprise systems will use SSDs for storage caching to increase access speed, taking advantage of this “mostly read” situation.
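As a rough sketch of that “mostly read” caching idea, here is a toy read-through cache in Python (my own hypothetical illustration; real enterprise caching layers are far more sophisticated). Reads are served from the fast tier whenever possible, while writes go straight to the backing store so the flash is spared rewrite cycles:

class ReadThroughCache:
    """Toy model: a fast tier (dict standing in for an SSD) in front of
    a slow backing store (dict standing in for spinning disks)."""

    def __init__(self, backing_store):
        self.backing = backing_store
        self.fast = {}  # the "SSD" tier

    def read(self, key):
        if key in self.fast:           # cache hit: served at SSD speed
            return self.fast[key]
        value = self.backing[key]      # cache miss: slow disk read...
        self.fast[key] = value         # ...then populate the fast tier
        return value

    def write(self, key, value):
        # Write-around policy: send writes to disk and drop any stale
        # cached copy, sparing the flash its limited rewrite cycles.
        self.backing[key] = value
        self.fast.pop(key, None)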
Server Memory will increase
In 2010, the server memory configuration will still be the FB-DIMM. More servers and greater memory capacity will be demanded by ISPs and data operators. On the other hand, they are becoming more power-conscious: the new slogan is to make servers and data centers “green”. International standards for power-saving servers will be set and adopted.
Along with the increase in server capacity, the “cloud computing” push will resurface. Some corporations will opt for terminals linked to central computing to avoid software maintenance and cost. The result is a new demand for low-power, high-speed, high-capacity memory modules designed especially for servers.
We will see DDR3 memory go from a 1.5-volt power supply to 1.35 volts. The norm for high-capacity server modules will be 4 ranks and either 8 GB or 16 GB per module. DDR3 multi-rank registered DIMMs will fill demand for the first part of 2010. The new LR-DIMM (load-reduced DIMM) will surface at the end of the year, replacing the FB-DIMM. The LR-DIMM differs from the FB-DIMM in its signal input structure: while the FB-DIMM uses a serial protocol, the LR-DIMM uses a stable parallel protocol and thus achieves a higher operating frequency while maintaining the same memory density.
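To put the voltage drop in perspective, dynamic power scales roughly with the square of the supply voltage (power is roughly proportional to C x V^2 x f), so the move from 1.5 V to 1.35 V is worth a noticeable cut by itself. A back-of-the-envelope Python sketch (my own arithmetic, not a figure from the article):

# Rough dynamic-power comparison: P is proportional to C * V**2 * f.
# Holding capacitance and frequency constant, only the V**2 term changes.
v_old, v_new = 1.5, 1.35
ratio = (v_new / v_old) ** 2
print(f"Relative dynamic power at 1.35 V: {ratio:.2f}")  # ~0.81
print(f"Approximate savings: {(1 - ratio) * 100:.0f}%")  # ~19%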
Conclusion
Although we are just climbing out of a memory depression, 2010 promises to be positive in all respects. PC volumes will grow, laptops will flourish, NAND flash will find new applications, and servers will take on a new dimension in memory. 2010 is looking to be a good year for the memory industry.

Virtualization

Virtualization is “the big story right now” as it pushes a transformation in organizations’ storage infrastructure, where direct-attached storage is gradually giving way to network-attached storage (NAS) and storage area networks (SAN), said Philip Barnes, a storage analyst at Toronto, Canada-based research firm IDC. Adopting network storage, said Barnes, means organizations can take advantage of the mobility features of virtualization, as well as increased resiliency, since the disk is no longer tied to a single physical machine.

10 Gigabit Ethernet vs. Fibre Channel

While Fibre Channel has been the undisputed standard of choice as an interconnect in the data center, the arrival of 10 Gigabit Ethernet networks threatens to challenge that, according to John Sloan, an analyst at London, Ontario-based Info-Tech Research Group. While Fibre Channel vendors can still argue that it’s a faster medium for data center storage, Sloan said “that differentiation will become a race” as Ethernet gains in the capacity it didn’t offer before and becomes increasingly affordable. And although organizations will start migrating to Ethernet, Fibre Channel will still have a significant footprint in the data center given prior investments in the technology. “It’s not like they’re going to be throwing out the Fibre Channel overnight. It’s not going to be a revolutionary change.”
Chris Gahagan, senior vice president of resource management software at EMC Corp., said that there will still be a place for Fibre Channel in the data center. For instance, Fibre Channel over Ethernet is “a way to preserve the integrity of the Fibre Channel network but running over an Ethernet backbone.”
Gahagan acknowledged that there is an increasing number of use cases for technologies like iSCSI and NAS, and “obviously EMC plays in all of the connectivity spaces,” he added.
More likely, he said, is that Fibre Channel will become a protocol converging with other protocols running over IP networks, as has occurred with data, voice, and video over IP.
