Interrupt

Signal to a computer processor emitted by hardware or software

interrupt sources and processor handling

In digital computers, an interrupt (sometimes referred to as a trap)[1] is a request for the processor to interrupt currently executing code (when permitted), so that the event can be processed in a timely manner. If the request is accepted, the processor will suspend its current activities, save its state, and execute a function called an interrupt handler (or an interrupt service routine, ISR) to deal with the event. This interruption is often temporary, allowing the software to resume[a] normal activities after the interrupt handler finishes, although the interrupt could instead indicate a fatal error.[2]

Interrupts are commonly used by hardware devices to indicate electronic or physical state changes that require time-sensitive attention. Interrupts are also commonly used to implement computer multitasking, especially in real-time computing. Systems that use interrupts in these ways are said to be interrupt-driven.[3]

Types

Interrupt signals may be issued in response to hardware or software events. These are classified as hardware interrupts or software interrupts, respectively. For any particular processor, the number of interrupt types is limited by the architecture.

Hardware interrupts

A hardware interrupt is a condition related to the state of the hardware that may be signaled by an external hardware device, e.g., an interrupt request (IRQ) line on a PC, or detected by devices embedded in processor logic (e.g., the CPU timer in IBM System/370), to communicate that the device needs attention from the operating system (OS)[4] or, if there is no OS, from the "bare-metal" program running on the CPU. Such external devices may be part of the computer (e.g., disk controller) or they may be external peripherals. For example, pressing a keyboard key or moving a mouse plugged into a PS/2 port triggers hardware interrupts that cause the processor to read the keystroke or mouse position.

Hardware interrupts can arrive asynchronously with respect to the processor clock, and at any time during instruction execution. Consequently, all incoming hardware interrupt signals are conditioned by synchronizing them to the processor clock, and acted upon only at instruction execution boundaries.

In many systems, each device is associated with a particular IRQ signal. This makes it possible to quickly determine which hardware device is requesting service, and to expedite servicing of that device.

On some older systems, such as the 1964 CDC 3600,[5] all interrupts went to the same location, and the OS used a specialized instruction to determine the highest-priority outstanding unmasked interrupt. On contemporary systems, there is generally a distinct interrupt routine for each type of interrupt (or for each interrupt source), often implemented as one or more interrupt vector tables.

Masking

To mask an interrupt is to disable it, while to unmask an interrupt is to enable it.[6]

Processors typically have an internal interrupt mask register,[b] which allows selective enabling[2] (and disabling) of hardware interrupts. Each interrupt signal is associated with a bit in the mask register. On some systems, the interrupt is enabled when the bit is set, and disabled when the bit is clear. On others, the reverse is true, and a set bit disables the interrupt. When the interrupt is disabled, the associated interrupt signal may be ignored by the processor, or it may remain pending. Signals which are affected by the mask are called maskable interrupts.
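
As a concrete illustration, the following C sketch shows how a bare-metal program might unmask and mask a single source by setting or clearing its bit in a memory-mapped mask register; the register address, and the convention that a set bit enables the interrupt, are assumptions for the example (as noted above, some processors use the opposite polarity).

```c
/* A minimal sketch, assuming a hypothetical memory-mapped interrupt mask
 * register at IRQ_MASK_ADDR in which a set bit ENABLES the corresponding
 * interrupt source. */
#include <stdint.h>

#define IRQ_MASK_ADDR 0x40000010u                /* hypothetical address */
#define IRQ_MASK      (*(volatile uint32_t *)IRQ_MASK_ADDR)

static inline void irq_unmask(unsigned irq)      /* enable one interrupt source  */
{
    IRQ_MASK |= (1u << irq);
}

static inline void irq_mask(unsigned irq)        /* disable one interrupt source */
{
    IRQ_MASK &= ~(1u << irq);
}
```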

Some interrupt signals are not affected by the interrupt mask and therefore cannot be disabled; these are called non-maskable interrupts (NMIs). These indicate high-priority events which cannot be ignored under any circumstances, such as the timeout signal from a watchdog timer.

Spurious interrupts

A spurious interrupt is a hardware interrupt for which no source can be found. The term "phantom interrupt" or "ghost interrupt" may also be used to describe this phenomenon. Spurious interrupts tend to be a problem with a wired-OR interrupt circuit attached to a level-sensitive processor input. Such interrupts may be difficult to identify when a system misbehaves.

In a wired-OR circuit, parasitic capacitance charging/discharging through the interrupt line's bias resistor will cause a small delay before the processor recognizes that the interrupt source has been cleared. If the interrupting device is cleared too late in the interrupt service routine (ISR), there won't be enough time for the interrupt circuit to return to the quiescent state before the current instance of the ISR terminates. The result is that the processor will think another interrupt is pending, since the voltage at its interrupt request input will not be high or low enough to establish an unambiguous internal logic 1 or logic 0. The apparent interrupt will have no identifiable source, hence the "spurious" moniker.

A spurious interrupt may also be the result of electrical anomalies due to faulty circuit design, high noise levels, crosstalk, timing issues, or more rarely, device errata.[7]

A spurious interrupt may result in system deadlock or other undefined operation if the ISR doesn't account for the possibility of such an interrupt occurring. As spurious interrupts are mostly a problem with wired-OR interrupt circuits, good programming practice in such systems is for the ISR to check all interrupt sources for activity and take no action (other than possibly logging the event) if none of the sources is interrupting.
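
The practice described above can be sketched as follows; the device status registers, their addresses, and the helper functions are hypothetical, and the check of bit 0 stands in for whatever "requesting service" indication a real device provides.

```c
/* Hypothetical shared-line ISR: poll every possible source; if none is
 * actually requesting service, treat the interrupt as spurious and only
 * log it. */
#include <stdint.h>

#define NUM_DEVICES 4

static volatile uint32_t *const device_status[NUM_DEVICES] = {
    (volatile uint32_t *)0x40001000u,            /* hypothetical status registers, */
    (volatile uint32_t *)0x40002000u,            /* bit 0 = "requesting service"   */
    (volatile uint32_t *)0x40003000u,
    (volatile uint32_t *)0x40004000u,
};

void device_service(int dev);                    /* assumed to exist elsewhere */
void log_spurious_interrupt(void);

void shared_line_isr(void)
{
    int serviced = 0;

    for (int dev = 0; dev < NUM_DEVICES; dev++) {
        if (*device_status[dev] & 1u) {
            device_service(dev);
            serviced = 1;
        }
    }

    if (!serviced)
        log_spurious_interrupt();                /* spurious: record it, take no other action */
}
```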

Software interrupts

A software interrupt is requested by the processor itself upon executing particular instructions or when certain conditions are met. Every software interrupt signal is associated with a particular interrupt handler.

A software interrupt may be intentionally caused by executing a special instruction which, by design, invokes an interrupt when executed.[c] Such instructions function similarly to subroutine calls and are used for a variety of purposes, such as requesting operating system services and interacting with device drivers (e.g., to read or write storage media). Software interrupts may also be triggered by program execution errors or by the virtual memory system.
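
For example, on 32-bit x86 Linux the legacy int 0x80 instruction raises a software interrupt whose handler is the system-call dispatcher. The following sketch (GCC inline assembly, i386 only, built with -m32; 64-bit systems use a different mechanism) requests the write service directly rather than through libc:

```c
/* 32-bit x86 Linux only: issue the write system call by raising software
 * interrupt 0x80 directly. */
#include <stddef.h>

static long sys_write_int80(int fd, const void *buf, size_t len)
{
    long ret;
    __asm__ volatile ("int $0x80"                /* software interrupt: legacy syscall gate */
                      : "=a" (ret)               /* result comes back in EAX                */
                      : "a" (4),                 /* __NR_write is 4 on i386                 */
                        "b" (fd), "c" (buf), "d" (len)
                      : "memory");
    return ret;
}

int main(void)
{
    static const char msg[] = "hello via int 0x80\n";
    sys_write_int80(1, msg, sizeof msg - 1);     /* fd 1 = standard output */
    return 0;
}
```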

Typically, the operating system kernel will catch and handle such interrupts. Some interrupts are handled transparently to the program - for example, the normal resolution of a page fault is to make the required page accessible in physical memory. But in other cases such as a segmentation fault the operating system executes a process callback. On Unix-like operating systems this involves sending a signal such as SIGSEGV, SIGBUS, SIGILL or SIGFPE, which may either call a signal handler or execute a default action (terminating the program). On Windows the callback is made using Structured Exception Handling with an exception code such as STATUS_ACCESS_VIOLATION or STATUS_INTEGER_DIVIDE_BY_ZERO.[8]
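
As a minimal illustration of the Unix-like case, the sketch below installs a SIGSEGV handler with sigaction() and then performs an invalid memory access; the kernel turns the resulting fault into a signal delivered to the process. (Deliberately dereferencing a null pointer is undefined behavior and is used here only to provoke the fault.)

```c
/* POSIX sketch: catch the SIGSEGV delivered for an invalid memory access,
 * instead of taking the default action (termination). */
#include <signal.h>
#include <string.h>
#include <unistd.h>

static void on_segv(int sig)
{
    (void)sig;
    /* Only async-signal-safe functions may be called here. */
    static const char msg[] = "caught SIGSEGV, exiting\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
    _exit(1);
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_segv;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    volatile int *p = NULL;
    *p = 42;            /* invalid access -> hardware fault -> SIGSEGV (deliberate) */
    return 0;
}
```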

In a kernel process, it is often the case that some types of software interrupts are not supposed to happen. If they occur nonetheless, an operating system crash may result.

Terminology

The terms interrupt, trap, exception, fault, and abort are used to distinguish types of interrupts, although "there is no clear consensus as to the exact meaning of these terms".[9] The term trap may refer to any interrupt, to any software interrupt, to any synchronous software interrupt, or only to interrupts caused by instructions with trap in their names. In some usages, the term trap refers specifically to a breakpoint intended to initiate a context switch to a monitor program or debugger.[1] It may also refer to a synchronous interrupt caused by an exceptional condition (e.g., division by zero, invalid memory access, illegal opcode),[9] although the term exception is more common for this.

x86 divides interrupts into (hardware) interrupts and software exceptions, and identifies three types of exceptions: faults, traps, and aborts.[10][11] (Hardware) interrupts are interrupts triggered asynchronously by an I/O device, and allow the program to be restarted with no loss of continuity.[10] A fault is restartable as well but is tied to the synchronous execution of an instruction - the return address points to the faulting instruction. A trap is similar to a fault except that the return address points to the instruction to be executed after the trapping instruction;[12] one prominent use is to implement system calls.[11] An abort is used for severe errors, such as hardware errors and illegal values in system tables, and often[d] does not allow a restart of the program.[12]

ARM uses the term exception to refer to all types of interrupts,[13] and divides exceptions into (hardware) interrupts, aborts, reset, and exception-generating instructions. Aborts correspond to x86 exceptions and may be prefetch aborts (failed instruction fetches) or data aborts (failed data accesses), and may be synchronous or asynchronous. Asynchronous aborts may be precise or imprecise. MMU aborts (page faults) are synchronous.[14]

Triggering methods

Each interrupt signal input is designed to be triggered by either a logic signal level or a particular signal edge (level transition). Level-sensitive inputs continuously request processor service so long as a particular (high or low) logic level is applied to the input. Edge-sensitive inputs react to signal edges: a particular (rising or falling) edge will cause a service request to be latched; the processor resets the latch when the interrupt handler executes.

Level-triggered

A level-triggered interrupt is requested by holding the interrupt signal at its particular (high or low) active logic level. A device invokes a level-triggered interrupt by driving the signal to and holding it at the active level. It negates the signal when the processor commands it to do so, typically after the device has been serviced.

The processor samples the interrupt input signal during each instruction cycle. The processor will recognize the interrupt request if the signal is asserted when sampling occurs.

Level-triggered inputs allow multiple devices to share a common interrupt signal via wired-OR connections. The processor polls to determine which devices are requesting service. After servicing a device, the processor may again poll and, if necessary, service other devices before exiting the ISR.

Edge-triggered

An edge-triggered interrupt is an interrupt signaled by a level transition on the interrupt line, either a falling edge (high to low) or a rising edge (low to high). A device wishing to signal an interrupt drives a pulse onto the line and then releases the line to its inactive state. If the pulse is too short to be detected by polled I/O then special hardware may be required to detect it. The important part of edge triggering is that if (say) the interrupt was triggered by a high-to-low transition, the level remaining low will not trigger a further interrupt; the line must return to the high level before falling again in order to trigger a further interrupt. This contrasts with a level trigger, where the low level would continue to create interrupts (if they are enabled) until the signal returns to its high level.

Computers with edge-triggered interrupts may include an interrupt register that retains the status of pending interrupts. Systems with interrupt registers generally have interrupt mask registers as well.

Processor response

The processor samples the interrupt trigger signals or interrupt register during each instruction cycle, and will process the highest priority enabled interrupt found. Regardless of the triggering method, the processor will begin interrupt processing at the next instruction boundary following a detected trigger, thus ensuring:

  • The processor status[e] is saved in a known manner. Typically the status is stored in a known location, but on some systems it is stored on a stack.
  • All instructions before the one pointed to by the PC have fully executed.
  • No instruction beyond the one pointed to by the PC has been executed, or any such instructions are undone before handling the interrupt.
  • The execution state of the instruction pointed to by the PC is known.

System implementation

Interrupts may be implemented in hardware as a distinct component with control lines, or they may be integrated into the memory subsystem[citation needed].

If implemented in hardware as a distinct component, an interrupt controller circuit such as the IBM PC's Programmable Interrupt Controller (PIC) may be connected between the interrupting device and the processor's interrupt pin to multiplex several sources of interrupt onto the one or two CPU lines typically available. If implemented as part of the memory controller, interrupts are mapped into the system's memory address space.

Shared IRQs

Multiple devices may share an edge-triggered interrupt line if they are designed to. The interrupt line must have a pull-down or pull-up resistor so that when not actively driven it settles to its inactive state, which is its default state. Devices signal an interrupt by briefly driving the line to its non-default state, and leave the line floating (not actively driven) when not signaling an interrupt. This type of connection is also referred to as open collector. The line then carries all the pulses generated by all the devices. (This is analogous to the pull cord on some buses and trolleys that any passenger can pull to signal the driver that they are requesting a stop.) However, interrupt pulses from different devices may merge if they occur close in time. To avoid losing interrupts the CPU must trigger on the trailing edge of the pulse (e.g. the rising edge if the line is pulled up and driven low). After detecting an interrupt the CPU must check all the devices for service requirements.

Edge-triggered interrupts do not suffer the problems that level-triggered interrupts have with sharing. Service of a low-priority device can be postponed arbitrarily, while interrupts from high-priority devices continue to be received and get serviced. If there is a device that the CPU does not know how to service, which may raise spurious interrupts, it won't interfere with interrupt signaling of other devices. However, it is easy for an edge-triggered interrupt to be missed - for example, when interrupts are masked for a period - and unless there is some type of hardware latch that records the event it is impossible to recover. This problem caused many "lockups" in early computer hardware because the processor did not know it was expected to do something. More modern hardware often has one or more interrupt status registers that latch interrupt requests; well-written edge-driven interrupt handling code can check these registers to ensure no events are missed.
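
A sketch of that pattern, assuming a hypothetical latched status register with write-1-to-clear semantics (a common but not universal convention):

```c
/* Edge-driven handling against a hypothetical latched interrupt status
 * register: read the latch, acknowledge exactly the bits seen, then service
 * them, and repeat until nothing is pending so that no edge is lost. */
#include <stdint.h>

#define IRQ_STATUS_ADDR 0x40000014u              /* hypothetical address */
#define IRQ_STATUS      (*(volatile uint32_t *)IRQ_STATUS_ADDR)

void handle_irq_source(unsigned bit);            /* assumed to exist elsewhere */

void edge_isr(void)
{
    uint32_t pending;

    while ((pending = IRQ_STATUS) != 0) {
        IRQ_STATUS = pending;                    /* acknowledge only what was read (W1C) */
        for (unsigned bit = 0; bit < 32; bit++) {
            if (pending & (1u << bit))
                handle_irq_source(bit);
        }
    }
}
```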

The elderly Industry Standard Architecture (ISA) bus uses edge-triggered interrupts, without mandating that devices be able to share IRQ lines, but all mainstream ISA motherboards include pull-up resistors on their IRQ lines, so well-behaved ISA devices sharing IRQ lines should just work fine. The parallel port also uses edge-triggered interrupts. Many older devices assume that they have exclusive use of IRQ lines, making it electrically unsafe to share them.

There are three ways multiple devices "sharing the same line" can be raised. First is by exclusive conduction (switching) or exclusive connection (to pins). Next is by bus (all connected to the same line listening): cards on a bus must know when they are to talk and not talk (i.e., the ISA bus). Talking can be triggered in two ways: by accumulation latch or by logic gates. Logic gates expect a continual data flow that is monitored for key signals. Accumulators only trigger when the remote side excites the gate beyond a threshold, thus no negotiated speed is required. Each has its speed versus distance advantages. A trigger, generally, is the method in which excitation is detected: rising edge, falling edge, threshold (an oscilloscope can trigger on a wide variety of shapes and conditions).

Triggering for software interrupts must be built into the software (both in the OS and the application). A 'C' application has a trigger table (a table of functions) in its header, which both the application and OS know of and use appropriately; it is not related to hardware. However, this should not be confused with hardware interrupts which signal the CPU (the CPU enacts software from a table of functions, similarly to software interrupts).

Difficulty with sharing interrupt lines

Multiple devices sharing an interrupt line (of any triggering style) all act as spurious interrupt sources with respect to each other. With many devices on one line, the workload in servicing interrupts grows in proportion to the square of the number of devices. It is therefore preferred to spread devices evenly across the available interrupt lines. Shortage of interrupt lines is a problem in older system designs where the interrupt lines are distinct physical conductors. Message-signaled interrupts, where the interrupt line is virtual, are favored in new system architectures (such as PCI Express) and relieve this problem to a considerable extent.

Some devices with a poorly designed programming interface provide no way to determine whether they have requested service. They may lock up or otherwise misbehave if serviced when they do not want it. Such devices cannot tolerate spurious interrupts, and so also cannot tolerate sharing an interrupt line. ISA cards, due to often cheap design and construction, are notorious for this problem. Such devices are becoming much rarer, as hardware logic becomes cheaper and new system architectures mandate shareable interrupts.

Hybrid

Some systems use a hybrid of level-triggered and edge-triggered signaling. The hardware not only looks for an edge, but it also verifies that the interrupt signal stays active for a certain period of time.

A common use of a hybrid interrupt is for the NMI (non-maskable interrupt) input. Because NMIs generally signal major – or even catastrophic – system events, a good implementation of this signal tries to ensure that the interrupt is valid by verifying that it remains active for a period of time. This 2-step approach helps to eliminate false interrupts from affecting the system.

Message-signaled

A message-signaled interrupt does not use a physical interrupt line. Instead, a device signals its request for service by sending a short message over some communications medium, typically a computer bus. The message might be of a type reserved for interrupts, or it might be of some pre-existing type such as a memory write.

Message-signalled interrupts behave very much like edge-triggered interrupts, in that the interrupt is a momentary signal rather than a continuous condition. Interrupt-handling software treats the two in much the same manner. Typically, multiple pending message-signaled interrupts with the same message (the same virtual interrupt line) are allowed to merge, just as closely spaced edge-triggered interrupts can merge.

Message-signalled interrupt vectors can be shared, to the extent that the underlying communication medium can be shared. No additional effort is required.

Because the identity of the interrupt is indicated by a pattern of data bits, not requiring a separate physical conductor, many more distinct interrupts can be efficiently handled. This reduces the need for sharing. Interrupt messages can also be passed over a serial bus, not requiring any additional lines.

PCI Express, a serial computer bus, uses message-signaled interrupts exclusively.

Doorbell

In a push button analogy applied to computer systems, the term doorbell or doorbell interrupt is often used to describe a mechanism whereby a software system can signal or notify a computer hardware device that there is some work to be done. Typically, the software system will place data in some well-known and mutually agreed upon memory locations, and "ring the doorbell" by writing to a different memory location. This different memory location is often called the doorbell region, and there may even be multiple doorbells serving different purposes in this region. It is this act of writing to the doorbell region of memory that "rings the bell" and notifies the hardware device that the data are ready and waiting. The hardware device would now know that the data are valid and can be acted upon. It would typically write the data to a hard disk drive, or send them over a network, or encrypt them, etc.
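
A minimal sketch of the sequence just described, with entirely hypothetical addresses for the shared data region and the doorbell register:

```c
/* Doorbell sketch: fill the agreed-upon shared buffer, make sure the data
 * are globally visible, then write the doorbell location to notify the
 * device that work is waiting. */
#include <stdint.h>

#define SHARED_BUF_ADDR 0x50000000u              /* hypothetical shared data region */
#define DOORBELL_ADDR   0x50001000u              /* hypothetical doorbell register  */

static void ring_doorbell(const void *data, uint32_t len)
{
    volatile uint8_t  *buf      = (volatile uint8_t *)SHARED_BUF_ADDR;
    volatile uint32_t *doorbell = (volatile uint32_t *)DOORBELL_ADDR;

    for (uint32_t i = 0; i < len; i++)           /* place the work in the shared region */
        buf[i] = ((const uint8_t *)data)[i];

    __sync_synchronize();                        /* barrier: data must be visible first */

    *doorbell = len;                             /* "ring the bell": device picks up the work */
}
```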

The term doorbell interrupt is usually a misnomer. It is similar to an interrupt, because it causes some work to be done by the device; however, the doorbell region is sometimes implemented as a polled region, sometimes the doorbell region writes through to physical device registers, and sometimes the doorbell region is hardwired directly to physical device registers. When either writing through or directly to physical device registers, this may cause a real interrupt to occur at the device's central processor unit (CPU), if it has one.

Doorbell interrupts can be compared to Message Signaled Interrupts, as they have some similarities.

Multiprocessor IPI

In multiprocessor systems, a processor may send an interrupt request to another processor via inter-processor interrupts[f] (IPI).

Performance

Interrupts provide low overhead and good latency at low load, but degrade significantly at high interrupt rate unless care is taken to prevent several pathologies. The phenomenon where the overall system performance is severely hindered by excessive amounts of processing time spent handling interrupts is called an interrupt storm.

There are various forms of livelocks, when the system spends all of its time processing interrupts to the exclusion of other required tasks. Under extreme conditions, a large number of interrupts (like very high network traffic) may completely stall the system. To avoid such problems, an operating system must schedule network interrupt handling as carefully as it schedules process execution.[15]

With multi-core processors, additional performance improvements in interrupt handling can be achieved through receive-side scaling (RSS) when multiqueue NICs are used. Such NICs provide multiple receive queues associated to separate interrupts; by routing each of those interrupts to different cores, processing of the interrupt requests triggered by the network traffic received by a single NIC can be distributed among multiple cores. Distribution of the interrupts among cores can be performed automatically by the operating system, or the routing of interrupts (usually referred to as IRQ affinity) can be manually configured.[16][17]
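
On Linux, for instance, IRQ affinity can be set manually by writing a CPU bitmask to /proc/irq/<n>/smp_affinity. The sketch below does this from C; the IRQ number is a placeholder (a real one can be found in /proc/interrupts) and the write requires root privileges.

```c
/* Pin an interrupt to CPUs 0-1 by writing the hexadecimal CPU mask "3"
 * to /proc/irq/<irq>/smp_affinity (Linux, requires root). */
#include <stdio.h>

static int set_irq_affinity(int irq, const char *cpu_mask_hex)
{
    char path[64];
    snprintf(path, sizeof path, "/proc/irq/%d/smp_affinity", irq);

    FILE *f = fopen(path, "w");
    if (f == NULL)
        return -1;                               /* no such IRQ, or not running as root */
    fprintf(f, "%s\n", cpu_mask_hex);
    fclose(f);
    return 0;
}

int main(void)
{
    return set_irq_affinity(42, "3") == 0 ? 0 : 1;   /* 42 is a placeholder IRQ number */
}
```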

A purely software-based implementation of the receiving traffic distribution, known as receive packet steering (RPS), distributes received traffic among cores later in the data path, as part of the interrupt handler functionality. Advantages of RPS over RSS include no requirements for specific hardware, more advanced traffic distribution filters, and reduced rate of interrupts produced by a NIC. As a downside, RPS increases the rate of inter-processor interrupts (IPIs). Receive flow steering (RFS) takes the software-based approach further by accounting for application locality; further performance improvements are achieved by processing interrupt requests by the same cores on which particular network packets will be consumed by the targeted application.[16][18][19]

Typical uses

Interrupts are commonly used to service hardware timers, transfer data to and from storage (e.g., disk I/O) and communication interfaces (e.g., UART, Ethernet), handle keyboard and mouse events, and to respond to any other time-sensitive events as required by the application system. Non-maskable interrupts are typically used to respond to high-priority requests such as watchdog timer timeouts, power-down signals and traps.

Hardware timers are often used to generate periodic interrupts. In some applications, such interrupts are counted by the interrupt handler to keep track of absolute or elapsed time, or used by the OS task scheduler to manage execution of running processes, or both. Periodic interrupts are also commonly used to invoke sampling from input devices such as analog-to-digital converters, incremental encoder interfaces, and GPIO inputs, and to program output devices such as digital-to-analog converters, motor controllers, and GPIO outputs.
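
A minimal sketch of the timekeeping use, assuming hypothetical hardware that invokes timer_isr() once per millisecond:

```c
/* Timekeeping from a periodic interrupt: the timer ISR simply counts ticks;
 * the rest of the system derives elapsed time from the counter. */
#include <stdint.h>

#define TICK_PERIOD_MS 1u

static volatile uint32_t tick_count;             /* written only by the timer ISR */

void timer_isr(void)                             /* invoked on every timer interrupt */
{
    tick_count++;
}

uint32_t uptime_ms(void)                         /* elapsed time since startup */
{
    return tick_count * TICK_PERIOD_MS;
}
```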

A disk interrupt signals the completion of a data transfer from or to the disk peripheral; this may cause a process to run which is waiting to read or write. A power-off interrupt predicts imminent loss of power, allowing the computer to perform an orderly shut-down while there still remains enough power to do so. Keyboard interrupts typically cause keystrokes to be buffered so as to implement typeahead.

Interrupts are sometimes used to emulate instructions which are unimplemented on some computers in a product family.[20] For example floating point instructions may be implemented in hardware on some systems and emulated on lower-cost systems. In the latter case, execution of an unimplemented floating point instruction will cause an "illegal instruction" exception interrupt. The interrupt handler will implement the floating point function in software and then return to the interrupted program as if the hardware-implemented instruction had been executed.[21] This provides application software portability across the entire line.

Interrupts are similar to signals, the difference being that signals are used for inter-process communication (IPC), mediated by the kernel (possibly via system calls) and handled by processes, while interrupts are mediated by the processor and handled by the kernel. The kernel may pass an interrupt as a signal to the process that caused it (typical examples are SIGSEGV, SIGBUS, SIGILL and SIGFPE).

History

Hardware interrupts were introduced as an optimization, eliminating unproductive waiting time in polling loops, waiting for external events. The first system to use this approach was the DYSEAC, completed in 1954, although earlier systems provided error trap functions.[22]

The UNIVAC 1103A computer is generally credited with the earliest use of interrupts in 1953.[23][24] Earlier, on the UNIVAC I (1951) "Arithmetic overflow either triggered the execution of a two-instruction fix-up routine at address 0, or, at the programmer's option, caused the computer to stop." The IBM 650 (1954) incorporated the first occurrence of interrupt masking. The National Bureau of Standards DYSEAC (1954) was the first to use interrupts for I/O. The IBM 704 was the first to use interrupts for debugging, with a "transfer trap", which could invoke a special routine when a branch instruction was encountered. The MIT Lincoln Laboratory TX-2 system (1957) was the first to provide multiple levels of priority interrupts.[24]

See also

  • Electronics portal

  • Advanced Programmable Interrupt Controller (APIC)
  • BIOS interrupt call
  • Event-driven programming
  • Exception handling
  • INT (x86 instruction)
  • Interrupt coalescing
  • Interrupt handler
  • Interrupt latency
  • Interrupts in 65xx processors
  • Ralf Brown's Interrupt List
  • Interrupts on IBM System/360 architecture
  • Time-triggered system
  • Autonomous peripheral operation

Notes

  1. ^ The operating system might resume the interrupted process or might switch to a different process.
  2. ^ The mask register may be a single register or multiple registers, e.g., bits in the PSW and other bits in control registers.
  3. ^ See INT (x86 instruction)
  4. ^ Some operating systems can recover from severe errors, e.g., paging in a page from a paging file after an uncorrectable ECC error in an unaltered page.
  5. ^ This might be just the Program Counter (PC), a PSW or multiple registers.
  6. ^ Known as shoulder taps on some IBM operating systems.

References

  1. ^ a b "The Jargon File, version 4.4.7". 2003-10-27. Retrieved 20 January 2022.
  2. ^ a b Jonathan Corbet; Alessandro Rubini; Greg Kroah-Hartman (2005). "Linux Device Drivers, Third Edition, Chapter 10. Interrupt Handling" (PDF). O'Reilly Media. p. 269. Retrieved December 25, 2014. Then it's just a matter of cleaning up, running software interrupts, and getting back to regular work. The "regular work" may well have changed as a result of an interrupt (the handler could wake_up a process, for example), so the last thing that happens on return from an interrupt is a possible rescheduling of the processor.
  3. ^ Rosenthal, Scott (May 1995). "Basics of Interrupts". Archived from the original on 2016-04-26. Retrieved 2010-11-11.
  4. ^ "Hardware interrupts". Retrieved 2014-02-09.
  5. ^ "Interrupt Instructions". Control Data 3600 Computer System Reference Manual (PDF). Control Data Corporation. July 1964. pp. 4–6. 60021300.
  6. ^ Bai, Ying (2017). Microcontroller Engineering with MSP432: Fundamentals and Applications. CRC Press. p. 21. ISBN 978-1-4987-7298-3. LCCN 2016020120. In Cortex-M4 system, the interrupts and exceptions have the following properties: ... Generally, a single bit in a mask register is used to mask (disable) or unmask (enable) certain interrupt/exceptions to occur
  7. ^ Li, Qing; Yao, Caroline (2003). Real-Time Concepts for Embedded Systems. CRC Press. p. 163. ISBN 1482280825.
  8. ^ "Hardware exceptions". docs.microsoft.com. 3 August 2021.
  9. ^ a b Hyde, Randall (1996). "Chapter Seventeen: Interrupts, Traps and Exceptions (Part 1)". The Art Of Assembly Language Programming. Retrieved 22 December 2021. The concept of an interrupt is something that has expanded in scope over the years. The 80x86 family has only added to the confusion surrounding interrupts by introducing the int (software interrupt) instruction. Indeed different manufacturers have used terms like exceptions faults aborts traps and interrupts to describe the phenomena this chapter discusses. Unfortunately there is no clear consensus as to the exact meaning of these terms. Different authors adopt different terms to their own use.
  10. ^ a b "Intel® 64 and IA-32 Architectures Software Developer's Manual Volume 1: Basic Architecture". pp. 6–12 Vol. 1. Retrieved 22 December 2021.
  11. ^ a b Bryant, Randal E.; O’Hallaron, David R. (2016). "8.1.2 Classes of exceptions". Computer systems: a programmer's perspective (Third, Global ed.). Harlow. ISBN 1-292-10176-8.
  12. ^ a b "Intel® 64 and IA-32 architectures software developer's manual volume 3A: System programming guide, part 1". p. 6-5 Vol. 3A. Retrieved 22 December 2021.
  13. ^ "Exception Handling". developer.arm.com. ARM Cortex-A Series Programmer's Guide for ARMv7-A. Retrieved 21 January 2022.
  14. ^ "Types of exception". developer.arm.com. ARM Cortex-A Series Programmer's Guide for ARMv7-A. Retrieved 22 December 2021.
  15. ^ Mogul, Jeffrey C.; Ramakrishnan, K. K. (1997). "Eliminating receive livelock in an interrupt-driven kernel". ACM Transactions on Computer Systems. 15 (3): 217–252. doi:10.1145/263326.263335. S2CID 215749380. Retrieved 2010-11-11.
  16. ^ a b Tom Herbert; Willem de Bruijn (May 9, 2014). "Documentation/networking/scaling.txt". Linux kernel documentation. kernel.org. Retrieved November 16, 2014.
  17. ^ "Intel 82574 Gigabit Ethernet Controller Family Datasheet" (PDF). Intel. June 2014. p. 1. Retrieved November 16, 2014.
  18. ^ Jonathan Corbet (November 17, 2009). "Receive packet steering". LWN.net. Retrieved November 16, 2014.
  19. ^ Jake Edge (April 7, 2010). "Receive flow steering". LWN.net. Retrieved November 16, 2014.
  20. ^ Thusoo, Shalesh; et al. "Patent US 5632028 A". Google Patents. Retrieved Aug 13, 2017.
  21. ^ Altera Corporation (2009). Nios II Processor Reference (PDF). p. 4. Retrieved Aug 13, 2017.
  22. ^ Codd, Edgar F. "Multiprogramming". Advances in Computers. 3: 82.
  23. ^ Bell, C. Gordon; Newell, Allen (1971). Computer structures: readings and examples. McGraw-Hill. p. 46. ISBN 9780070043572. Retrieved Feb 18, 2019.
  24. ^ a b Smotherman, Mark. "Interrupts". Retrieved 22 December 2021.

External links

  • Interrupts Made Easy
  • Interrupts for Microchip PIC Microcontroller
  • IBM PC Interrupt Table
  • University of Alberta CMPUT 296 Concrete Computing Notes on Interrupts, archived from the original on March 13, 2012

Retrieved from "https://en.wikipedia.org/w/index.php?title=Interrupt&oldid=1096479280"


Page 2

Second generation of personal computers by IBM

Personal System/2

An assortment of PS/2s in various form factors[a]

Also known as: PS/2
Developer: International Business Machines Corporation (IBM)
Manufacturer: IBM
Type: Personal computers
Release date: April 1987
Discontinued: July 1995[1]
Media:

  • 3.5-inch floppy disks
  • 5.25-inch floppy disks (optional, external drive)

Operating system:

  • IBM PC DOS
  • OS/2
  • Windows 2.0x
  • Windows 3.0
  • Windows 3.1x
  • Windows NT 3.1
  • Windows NT 3.5

CPU: Various; see list of models
Graphics: VGA
Power: 120/240 V AC (desktops)
Predecessor: Personal Computer/AT
Successor:

  • PC Series (desktops)
  • ThinkPad (portables)

Related:

  • PS/1
  • PS/ValuePoint
  • Ambra

The Personal System/2 or PS/2 is IBM's second generation[2][3] of personal computers. Released in 1987, it officially replaced the IBM PC, XT, AT, and PC Convertible in IBM's lineup. Many of the PS/2's innovations, such as the 16550 UART (serial port), 1440 KB 3.5-inch floppy disk format, 72-pin SIMMs, the PS/2 port, and the VGA video standard, went on to become standards in the broader PC market.[4][5]

The PS/2 line was created by IBM partly in an attempt to recapture control of the PC market by introducing the advanced yet proprietary Micro Channel architecture (MCA) on higher-end models. These models were in the strange position of being incompatible with the IBM-compatible hardware standards previously established by IBM and adopted in the PC industry. However, IBM's initial PS/2 computers were popular with target market corporate buyers, and by September 1988 IBM reported that it had sold 3 million PS/2 machines. This was only 18 months after the new range had been introduced.

Most major PC manufacturers balked at IBM's licensing terms for MCA-compatible hardware, particularly the per-machine royalties. In 1992, Macworld stated that "IBM lost control of its own market and became a minor player with its own technology."[6]

The OS/2 operating system was announced at the same time as the PS/2 line and was intended to be the primary operating system for models with Intel 80286 or later processors. However, at the time of the first shipments, only IBM PC DOS 3.3 was available. OS/2 1.0 (text-mode only) and Microsoft's Windows 2.0 became available several months later. IBM also released AIX PS/2, a UNIX operating system for PS/2 models with Intel 386 or later processors.

Technology

Predecessors to the PS/2
Name Year
IBM Personal Computer 1981
IBM Personal Computer XT 1983
IBM Portable Personal Computer 1984
IBM PCjr 1984
IBM Personal Computer/AT 1984
IBM PC Convertible 1986
IBM Personal Computer XT 286 1986

IBM's PS/2 was designed to remain software compatible with their PC/AT/XT line of computers upon which the large PC clone market was built, but the hardware was quite different. PS/2 had two BIOSes: one named ABIOS (Advanced BIOS) which provided a new protected mode interface and was used by OS/2, and CBIOS (Compatible BIOS) which was included to be software compatible with the PC/AT/XT. CBIOS was so compatible that it even included Cassette BASIC. While IBM did not publish the BIOS source code, it did promise to publish BIOS entry points.[7]

Micro Channel architecture

With certain models of the IBM PS/2 line, Micro Channel Architecture (MCA) was also introduced.[7] MCA was conceptually similar to the channel architecture of the IBM System/360 mainframes. MCA was technically superior to ISA and allowed for higher speed communications within the system. The majority of MCA's features would be seen in later buses, with the exception of the streaming-data procedure, channel-check reporting, error logging[8] and internal bus-level video pass-through for devices like the IBM 8514. Transfer speeds were on par with the much later PCI standard. MCA allowed one-to-one, card to card, and multi-card to processor simultaneous transaction management, which is a feature of the PCI-X bus format.

Bus mastering capability, bus arbitration, and a primitive form of plug-and-play management of hardware were all benefits of MCA. Gilbert Held in his 2000 book Server Management observes: "MCA used an early (and user-hostile) version of what we know now as 'Plug-N′-Play', requiring a special setup disk for each machine and each card."[9] MCA never gained wide acceptance outside of the PS/2.

When setting up a card with its configuration disk, all choices for interrupts and other settings were made automatically: the PC read the old configuration from the floppy disk, made the necessary changes, and then recorded the new configuration to the floppy disk. This meant that the user had to keep that same floppy disk matched to that particular PC. For a small organization with a few PCs, this was annoying, but less expensive and time consuming than bringing in a PC technician to do installation. But for large organizations with hundreds or even thousands of PCs, permanently matching each PC with its own floppy disk was a logistical nightmare. Without the original (and correctly updated) floppy disk, no changes could be made to the PC's cards.

In addition to the technical setup, royalties were legally required for each MCA-compatible machine sold. There was nothing unique in IBM insisting on payment of royalties on the use of its patents applied to Micro Channel-based machines; up until that time, some companies had failed to pay IBM for the use of its patents on the earlier generation of Personal Computer.[citation needed]

Keyboard/mouse

Layout

The PS/2 IBM Model M keyboard used the same 101-key layout of the previous IBM PC/AT Extended keyboard, itself derived from the original IBM PC keyboard.[7] European variants had 102 keys with the addition of an extra key to the right of the left Shift key.

Interface

The original IBM PS/2 mouse

PS/2 connection ports (later colored purple for keyboard and green for mouse, according to PC 97) were once commonly used for connecting input devices.

PS/2 systems introduced a new specification for the keyboard and mouse interfaces, which are still in use today (though increasingly supplanted by USB devices) and are thus called "PS/2" interfaces. The PS/2 keyboard interface, inspired by Apple's ADB interface, was electronically identical to the long-established AT interface, but the cable connector was changed from the 5-pin DIN connector to the smaller 6-pin mini-DIN interface. The same connector and a similar synchronous serial interface was used for the PS/2 mouse port.

The initial desktop Model 50 and Model 70 also featured a new cableless internal design, based on use of interposer circuit boards to link the internal drives to the planar (motherboard). Additionally these machines could be largely disassembled and reassembled for service without tools.

Additionally, the PS/2 introduced a new software data area known as the Extended BIOS Data Area (EBDA). Its primary use was to add a new buffer area for the dedicated mouse port. This also required making a change to the "traditional" BIOS Data Area (BDA) which was then required to point to the base address of the EBDA.
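
For illustration, real-mode software locates the EBDA through the BDA word at physical address 0x40E, which holds the EBDA's segment. The sketch below reads that word through /dev/mem on Linux (requires root, and may be blocked by kernel memory-protection settings); it is a way to observe the pointer, not how BIOS-era code would normally be written.

```c
/* Read the BDA word at physical address 0x40E, which holds the real-mode
 * segment of the EBDA; the EBDA base is that segment shifted left by four. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/mem", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/mem");
        return 1;
    }

    uint16_t ebda_segment = 0;
    if (pread(fd, &ebda_segment, sizeof ebda_segment, 0x40E) != (ssize_t)sizeof ebda_segment) {
        perror("pread");
        close(fd);
        return 1;
    }
    close(fd);

    printf("EBDA base: 0x%05x\n", (unsigned)ebda_segment << 4);
    return 0;
}
```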

Another new PS/2 innovation was the introduction of bidirectional parallel ports which in addition to their traditional use for connecting a printer could now function as a high speed data transfer interface. This allowed the use of new hardware such as parallel port scanners, CD-ROM drives, and also enhanced the capabilities of printers by allowing them to communicate with the host PC and send back signals instead of simply being a passive output device.

Graphics

Most of the initial range of PS/2 models were equipped with a new frame buffer known as the Video Graphics Array, or VGA for short. This effectively replaced the previous EGA standard.[7] VGA increased graphics memory to 256 KB and provided for resolutions of 640×480 with 16 colors, and 320×200 with 256 colors. VGA also provided a palette of 262,144 colors (as opposed to the EGA palette of 64 colors). The IBM 8514 and later XGA computer display standards were also introduced on the PS/2 line.

Key monitors and their maximum resolutions:

  • 8504: 12″, 640×480, 60 Hz non-interlaced, 1991, monochrome
  • 8507: 19″, 1024×768, 43.5 Hz interlaced, 1988, monochrome
  • 8511: 14″, 640×480, 60 Hz non-interlaced, 1987
  • 8512: 14″, 640×480, 60 Hz non-interlaced, 1987
  • 8513: 12″, 640×480, 60 Hz non-interlaced, 1987
  • 8514: 16″, 1024×768, 43.5 Hz interlaced, 1987
  • 8515: 14″, 1024×768, 43.5 Hz interlaced, 1991
  • 8516: 14″, 1024×768, 43.5 Hz interlaced, 1991
  • 8518: 14″, 640×480, 75 Hz non-interlaced, 1992
  • 9515: 14″, 1024×768, 43.5 Hz interlaced, 1992
  • 9517: 16″, 1280×1024, 53 Hz interlaced, 1991
  • 9518: 14″, 640×480, non-interlaced, 1992
  • 38F4737: 10", 640×480, non-interlaced, 1989, amber monochrome plasma screen; this display was exclusive to models P70 and P75

In truth, all "XGA" 1024×768 monitors are multimode, as XGA works as an add-on card to a built-in VGA and transparently passes through the VGA signal when not operating in a high resolution mode. All of the listed 85xx displays can therefore sync 640×480 at 60 Hz (or 720×400 at 70 Hz) in addition to any higher mode they may also be capable of. This however is not true of the 95xx models (and some unlisted 85xx's), which are specialist workstation displays designed for use with the XGA-2 or Image Adapter/A cards, and whose fixed frequencies all exceed that of basic VGA – the lowest of their commonly available modes instead being 640×480 at 75 Hz, if not something much higher still. It is also worth noting that these were still merely dual- or "multiple-frequency" monitors, not variable-frequency (also known as multisync); in particular, despite running happily at 640×480/720×400 and 1024×768, an (e.g.) 8514 cannot sync the otherwise common intermediate 800×600 "SVGA" resolution, even at the relatively low 50 to 56 Hz refresh rates initially used.

Although the design of these adapters did not become an industry standard as VGA did, their 1024×768 pixel resolution was subsequently widely adopted as a standard by other manufacturers, and "XGA" became a synonym for this screen resolution. The lone exceptions were the bottom-rung 8086-based Models 25 and 30, which had a cut-down version of VGA referred to as MCGA; the 286 models came with VGA. MCGA supported the CGA graphics modes, the VGA 320×200 256-color and 640×480 2-color modes, but not EGA or color 640×480.

MCA IBM XGA-2 Graphics Card

VGA video connector

All of the new PS/2 graphics systems (whether MCGA, VGA, 8514, or later XGA) used a 15-pin D-sub connector for video out. This used analog RGB signals, rather than four or six digital color signals as on previous CGA and EGA monitors. The digital signals limited the color gamut to a fixed 16 or 64 color palette with no room for expansion. In contrast, any color depth (bits per primary) can be encoded into the analog RGB signals so the color gamut can be increased arbitrarily by using wider (more bits per sample) DACs and a more sensitive monitor. The connector was also compatible with analog grayscale displays. Unlike earlier systems (like MDA and Hercules) this was transparent to software, so all programs supporting the new standards could run unmodified whichever type of display was attached. (On the other hand, whether the display was color or monochrome was undetectable to software, so selection between application displays optimized for color or monochrome, in applications that supported both, required user intervention.) These grayscale displays were relatively inexpensive during the first few years the PS/2 was available, and they were very commonly purchased with lower-end models.

The VGA connector became the de facto standard for connecting monitors and projectors on both PC and non-PC hardware over the course of the early 1990s, replacing a variety of earlier connectors.

Storage

Some PS/2 models used a quick-attachment socket on the back of the floppy drive which is incompatible with a standard 5.25" floppy connector.

Close-up of unusual 72-pin MCA internal hard drive connector

Apple had first popularized the 3.5" floppy on the Macintosh line and IBM brought them to the PC in 1986 with the PC Convertible. In addition, they could be had as an optional feature on the XT and AT. The PS/2 line used entirely 3.5" drives which assisted in their quick adoption by the industry, although the lack of 5.25" drive bays in the computers created problems later on in the 1990s as they could not accommodate internal CD-ROM drives. In addition, the lack of built-in 5.25" floppy drives meant that PS/2 users could not immediately run the large body of existing IBM compatible software.[10] However IBM made available optional external 5.25" drives, with internal adapters for the early PS/2 models, to enable data transfer.

3.5" DD and HD floppies

In the initial lineup, IBM used 720 KB double density (DD) capacity drives on the 8086-based models and 1440 KB high density (HD) on the 80286-based and higher models. By the end of the PS/2 line they had moved to a somewhat standardized capacity of 2880 KB.

The PS/2 floppy drives lacked a capacity detector. 1440 KB floppies had a hole so that drives could distinguish them from 720 KB floppies, preventing users from formatting the smaller capacity disks to the higher capacity (doing so would work, but with a higher tendency of data loss). Clone manufacturers implemented the hole detection, but IBM did not. As a result, a 720 KB floppy could be formatted to 1440 KB in a PS/2, but the resulting floppy would only be readable by a PS/2 machine.[11]

PS/2s primarily used Mitsubishi floppy drives and did not use a separate Molex power connector; the data cable also contained the power supply lines. As the hardware aged the drives often malfunctioned due to bad quality capacitors.[citation needed]

The PS/2 used several different types of internal hard drives. Early models used MFM or ESDI drives. Some desktop models used combo power/data cables similar to the floppy drives. Later models used DBA ESDI or Parallel SCSI. Typically, desktop PS/2 models only permitted use of one hard drive inside the computer case. Additional storage could be attached externally using the optional SCSI interface.

Memory

Later PS/2 models introduced the 72-pin SIMM[12] which became the de facto standard for RAM modules by the mid-1990s in mid-to-late 486 and nearly all Pentium desktop systems. 72-pin SIMMs were 32/36 bits wide and replaced the old 30-pin SIMM (8/9-bit) standard. The older SIMMs were much less convenient because they had to be installed in sets of two or four to match the width of the CPU's 16-bit (Intel 80286 and 80386SX) or 32-bit (80386 and 80486) data bus, and would have been extremely inconvenient to use in Pentium systems (which featured a 64-bit memory bus). 72-pin SIMMs were also made with greater capacities (starting at 1 MB and ultimately reaching 128 MB, vs 256 KB to 16 MB and more commonly no more than 4 MB for 30-pin) and in a more finely graduated range (powers of 2, instead of powers of 4).

Many PS/2 models also used proprietary IBM SIMMs and could not be fitted with commonly available types. However industry standard SIMMs could be modified to work in PS/2 machines if the SIMM-presence and SIMM-type detection bridges, or associated contacts, were correctly rewired.[citation needed]

Models

At launch, the PS/2 family comprised the Model 30, 50, 60 and 80;[7] the Model 25 was launched a few months later.

IBM Personal System/2 Model 30 286. Power-on self-test, bootstrapping, power-off

The PS/2 Models 25 and 30 (IBM 8525 and 8530, respectively) were the lowest-end models in the lineup and meant to replace the IBM PC and XT. Model 25s came with either an 8 MHz 8086 CPU, 512 KB of RAM and 720 KB floppy disks, or an 80286 CPU. The 8086 models had ISA expansion slots and a built-in MCGA monitor, which could be either color or monochrome, while the 80286 models came with a VGA monitor and ISA expansion slots. A cut-down Model M with no numeric keypad was standard, with the normal keyboard being an extra-cost option. There was a very rare later model, the PS/2 Model 25-SX, which sported either a 16 MHz or 20 MHz 386 CPU, up to 12 MB of memory, an IDE hard drive, a VGA monitor and 16-bit ISA slots, making it the highest-specification Model 25 available, denoted by model number 8525-L41.

Case badge on a Model 25 SX (8525-L41)

The Model 30 had either an 8086 or 286 CPU and sported the full 101-key keyboard and standalone monitor along with three 8-bit ISA expansion slots. 8086 models had 720 KB floppies while 286 models had 1440 KB ones. Both the Model 25 and 30 could have an optional 20 MB ST-506 hard disk (which in the Model 25 took the place of the second floppy drive if so equipped and used a proprietary 3.5" form factor). 286-based Model 30s are otherwise a full AT-class machine and support up to 4 MB of RAM.

IBM Personal System/2 Model 25

Later ISA PS/2 models comprised the Model 30 286 (a Model 30 with an Intel 286 CPU), Model 35 (IBM 8535) and Model 40 (IBM 8540) with Intel 386SX or IBM 386SLC processors.

The higher-numbered models (above 50) were equipped with the Micro Channel bus and mostly ESDI or SCSI hard drives (models 60-041 and 80-041 had MFM hard drives). PS/2 Models 50 (IBM 8550) and 60 (IBM 8560) used the Intel 286 processor, the PS/2 Models 70 (IBM 8570) and 80 used the 386DX, while the mid-range PS/2 Model 55SX (IBM 8555-081) used the 16/32-bit 386SX processor. The Model 50 was revised to the Model 50Z, still with the 10 MHz 80286 processor but with memory run at zero wait states, and a switch to ESDI hard drives. Later Model 70 and 80 variants (B-xx) also used 25 MHz Intel 486 processors, in a complex called the Power Platform.

The externally very similar Models 60 and 80 next to each other

IBM Model 70 (case open over case closed)

The PS/2 Models 90 (IBM 8590/9590) and 95 (IBM 8595/9595/9595A) used Processor Complex daughterboards holding the CPU, memory controller, MCA interface, and other system components. The available Processor Complex options ranged from the 20 MHz Intel 486 to the 90 MHz Pentium and were fully interchangeable. The IBM PC Server 500, which has a motherboard identical to the 9595A, also uses Processor Complexes.

Other later Micro Channel PS/2 models included the Model 65SX with a 16 MHz 386SX; various Model 53 (IBM 9553), 56 (IBM 8556) and 57 (IBM 8557) variants with 386SX, 386SLC or 486SLC2 processors; the Models 76 and 77 (IBM 9576/9577) with 486SX or 486DX2 processors respectively; and the 486-based Model 85 (IBM 9585).

The IBM PS/2E (IBM 9533) was the first Energy Star compliant personal computer. It had a 50 MHz IBM 486SLC processor, an ISA bus, four PC card slots, and an IDE hard drive interface. The environmentally friendly PC borrowed many components from the ThinkPad line and was composed of recycled plastics, designed to be easily recycled at the end of its life, and used very little power.

The IBM PS/2 Server 195 and 295 (IBM 8600) were 486-based dual-bus MCA network servers supporting asymmetric multiprocessing, designed by Parallan Computer Inc.

The IBM PC Server 720 (IBM 8642) was the largest MCA-based server made by IBM, although it was not, strictly speaking, a PS/2 model. It could be fitted with up to six Intel Pentium processors interconnected by the Corollary C-bus and up to eighteen SCSI hard disks. This model was equipped with seven combination MCA/PCI slots.

PS/2 portables, laptops and notebooks

PS/2 N33SX laptop (1992)

IBM also produced several portable and laptop PS/2s, including the Model L40 (ISA-bus 386SX), N33 (IBM's first notebook-format computer from year 1991, Model 8533, 386SX), N51 (386SX/SLC), P70 (386DX) and P75 (486DX2).

The IBM ThinkPad 700C, aside from being labeled "700C PS/2" on the case, featured MCA and a 486SLC CPU.

6152 Academic System

The 6152 Academic System was a workstation computer developed by IBM's Academic Information Systems (ACIS) division for the university market introduced in February 1988. The 6152 was based on the PS/2 Model 60, adding a RISC Adapter Card on the Micro Channel bus. This card was a co-processor that enabled the 6152 to run ROMP software compiled for IBM's Academic Operating System (AOS), a version of BSD UNIX for the ROMP that was only available to select colleges and universities.[13]

The RISC Adapter Card contained the ROMP-C microprocessor (an enhanced version of the ROMP that first appeared in the IBM RT PC workstations), a memory management unit (the ROMP had virtual memory), a floating-point coprocessor, and up to 8 MB of memory for use by the ROMP.[14] The 6152 was the first computer to use the ROMP-C, which would later be introduced in new RT PC models.[15]

Marketing

During the 1980s, IBM's advertising of the original PC and its other product lines had frequently used the likeness of Charlie Chaplin. For the PS/2, however, IBM augmented this character with the following jingle:

How ya gonna do it?
PS/2 it!
It's as easy as IBM. (Or, "The solution is IBM.")

Another campaign featured actors from the television show M*A*S*H playing the staff of a contemporary (i.e. late-1980s) business in roles reminiscent of their characters' roles from the series. Harry Morgan, Larry Linville, William Christopher, Wayne Rogers, Gary Burghoff, Jamie Farr, and Loretta Swit were in from the beginning, whereas Alan Alda joined the campaign later.[16]

The profound lack of success of these advertising campaigns led, in part, to IBM's termination of its relationships with its global advertising agencies; these accounts were reported by Wired magazine to have been worth over $500 million a year, and the largest such account review in the history of business.[17]

Overall, the PS/2 line was largely unsuccessful with the consumer market, even though the PC-based Models 30 and 25 were an attempt to address that. With what was widely seen as a technically competent but cynical attempt to gain undisputed control of the market, IBM unleashed an industry backlash, which went on to standardize VESA, EISA and PCI. In large part, IBM failed to establish a link in the consumer's mind between the PS/2 MicroChannel architecture and the immature OS/2 1.x operating system; the more capable OS/2 version 2.0 was not released until 1992.[18]

The firm suffered massive financial losses for the remainder of the 1980s, forfeiting its previously unquestioned position as the industry leader, and eventually lost its status as the largest manufacturer of personal computers, first to Compaq and then to Dell. From a high of 10,000 employees in Boca Raton before the PS/2 came out, only seven years later, IBM had $600 million in unsold inventory and was laying off staff by the thousands.[19][20] After the failure of the PS/2 line to establish a new standard, IBM was forced to revert to building ISA PCs—following the industry it had once led—with the low-end PS/1 line and later with the more compatible Aptiva and PS/ValuePoint lines.

Still, the PS/2 platform experienced some success in the corporate sector where the reliability, ease of maintenance and strong corporate support from IBM offset the rather daunting cost of the machines. Also, many people still lived with the motto "Nobody ever got fired for buying an IBM". In the mid-range desktop market, the models 55SX and later 56SX were the leading sellers for almost their entire lifetimes. Later PS/2 models saw a production life span that took them into the late 1990s, within a few years of IBM selling off the division.

See also

Successors

  • IBM PS/ValuePoint
  • Ambra Computer Corporation
  • IBM Aptiva

Concurrent

  • IBM PS/1

Notes

  1. ^ From left to right: a Server 95, a Model 80, a Model 25, and a PS/2E on top of a Model 56 and a Model 30 286

References

  1. ^ Singh, Jai (April 10, 1995). "MCA, PS/2 bite the dust; OS/2 to follow?". InfoWorld. 17 (15): 3.
  2. ^ Tooley, Mike (1995). PC-based Instrumentation and Control. p. 19. ISBN 9780080938271 – via Google Books.
  3. ^ Clancy, Heather (June 2, 1988). "IBM adds to second generation of personal computers". UPI. p. 219.
  4. ^ IBM Personal System/2 Hardware Interface Technical Reference (PDF). IBM. May 1988. 68X2330. Retrieved 2016-11-26.
  5. ^ "PS/2 Reference Manuals". MCA Mafia. 2006-03-04. Retrieved 2016-11-26.
  6. ^ Borrell, Jerry (May 1992). "Opening Pandora's Box". Macworld. pp. 21–22.
  7. ^ a b c d e BYTE editorial staff (June 1987). "The IBM PS/2 Computers". BYTE. p. 100. Retrieved 5 November 2013.
  8. ^ http://www.mcamafia.de/pdf/server_hmm_s30h2501_01.pdf[bare URL PDF]
  9. ^ Gilbert Held (2000). Server Management. CRC Press. p. 199. ISBN 978-1-4200-3106-5.
  10. ^ Jim Porter (1998-12-14). "100th Anniversary Conference: Magnetic Recording and Information Storage" (PDF). disktrend.com. Archived from the original (PDF) on 2012-03-28. Retrieved 2014-03-24.
  11. ^ Ohland, Louis. "floppy". ohlandl.ipv7.net.
  12. ^ "The IBM PS/2: 25 Years of PC History". PCWorld. Retrieved 2018-08-28.
  13. ^ LaPlante, Alice (1988-02-08). "Workstation Merges PS/2, RT Technology". InfoWorld. Vol. 10, no. 6. pp. 1, 81.
  14. ^ IBM Academic System 6152: Quick Reference and Reference Diskette. January 1988. p. 2.
  15. ^ The University of Michigan Computing Center (c. 1988). "UNIX Notes". U-M Computing News. Vol. 3. p. 19.
  16. ^ "M*A*S*H Cast Commercials - IMB PS/2". YouTube. Archived from the original on 2015-05-28. Retrieved 13 April 2014.
  17. ^ Wired, Issue 3.08, August 1995
  18. ^ McCracken, Harry (2 April 2012). "25 Years of IBM's OS/2: The Strange Days and Surprising Afterlife of a Legendary Operating System". Time – via techland.time.com.
  19. ^ "IBM in Boca Raton | First Personal Computer | Companies in Boca Raton".
  20. ^ Vijayan, Jaikumar (August 1, 1994). "IBM cuts PC force, kills Ambra Corp". Computerworld. Vol. 28, no. 31. p. 4.

Further reading

  • Burton, Greg. IBM PC and PS/2 pocket reference. NDD (the old dealer channel), 1991.
  • Byers, T.J. IBM PS/2: A Reference Guide. Intertext Publications, 1989. ISBN 0-07-009525-6.
  • Dalton, Richard and Mueller, Scott. IBM PS/2 Handbook . Que Publications, 1989. ISBN 0-88022-334-0.
  • Held, Gilbert. IBM PS/2: User's Reference Manual. John Wiley & Sons Inc., 1989. ISBN 0-471-62150-1.
  • Hoskins, Jim. IBM PS/2. John Wiley & Sons Inc., fifth revised edition, 1992. ISBN 0-471-55195-3.
  • Leghart, Paul M. The IBM PS/2 in-depth report. Pachogue, NY: Computer Technology Research Corporation, 1988.
  • Newcom, Kerry. A Closer Look at IBM PS/2 Microchannel Architecture. New York: McGraw-Hill, 1988.
  • Norton, Peter. Inside the IBM PC and PS/2. Brady Publishing, fourth edition 1991. ISBN 0-13-465634-2.
  • Outside the IBM PC and PS/2: Access to New Technology. Brady Publishing, 1992. ISBN 0-13-643586-6.
  • Shanley, Tom. IBM PS/2 from the Inside Out. Addison-Wesley, 1991. ISBN 0-201-57056-4.

External links

  • IBM Type 8530
  • IBM PS/2 Personal Systems Reference Guide 1992 - 1995
  • Computercraft - The PS/2 Resource Center
  • Ardent Tool of Capitalism - covers all PS/2 models and adapters
  • PS/2 keyboard pinout
  • PS/2 Mouse/Keyboard Interfacing at the Wayback Machine (archived August 10, 2015)
  • Computer Chronicles episode on the PS/2
  • IBM PS/2 L40 SX (8543)

Retrieved from "https://en.wikipedia.org/w/index.php?title=IBM_PS/2&oldid=1112841601"