I/O Scheduling

COMP124 Lectures

I/O requests need to be scheduled to execute in an efficient order.

A good ordering can improve system performance, ensure devices are shared fairly amongst processes and reduce average I/O completion wait time.

Scheduling done via wait queues for each device:

  • I/O scheduler may rearrange the order of the queue to improve efficiency and response.
  • Priority may be given to requests requiring a fast response.
  • Choice of different scheduling algorithms available for disk I/O (an SSTF sketch follows this list):
    • First Come, First Served (FCFS)
    • Shortest Seek Time First (SSTF)
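
Below is a minimal C sketch of SSTF: starting from the current head position, it repeatedly services the pending request with the smallest seek distance. The request queue and starting track are made up purely for illustration.

    /* Shortest Seek Time First (SSTF): always service the pending request
     * closest to the current head position. Illustrative values only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <limits.h>

    static void sstf(int requests[], int n, int head)
    {
        int served[64] = {0};                     /* sketch assumes n <= 64 */

        for (int i = 0; i < n; i++) {
            int best = -1, best_dist = INT_MAX;

            /* Find the unserved request with the smallest seek distance. */
            for (int j = 0; j < n; j++) {
                int dist = abs(requests[j] - head);
                if (!served[j] && dist < best_dist) {
                    best = j;
                    best_dist = dist;
                }
            }

            served[best] = 1;
            printf("seek %d -> %d (distance %d)\n", head, requests[best], best_dist);
            head = requests[best];
        }
    }

    int main(void)
    {
        int queue[] = {98, 183, 37, 122, 14, 124, 65, 67};   /* pending tracks */
        sstf(queue, 8, 53);                                  /* head starts at track 53 */
        return 0;
    }

Note that SSTF can starve requests far from the head; FCFS avoids starvation at the cost of longer average seeks.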

Buffering

Consider reading a sequence of characters from a device. Making a read request for each character is costly.

Instead, set up an area of memory called a buffer, as sketched after this list:

  • Read a block of chars into buffer in one operation.
  • Subsequent chars are taken directly from the buffer.
  • Only need to access device when buffer empties.
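
A minimal C sketch of read buffering, assuming the device is reachable through a file descriptor; buffered_getc and the buffer size are illustrative, not a real OS interface.

    #include <unistd.h>

    #define BUF_SIZE 4096

    static char    buf[BUF_SIZE];
    static ssize_t buf_len = 0;    /* bytes returned by the last read() */
    static ssize_t buf_pos = 0;    /* next character to hand out */

    /* Return the next character from fd, refilling the buffer only when it
     * empties; returns -1 on end-of-file or error. */
    int buffered_getc(int fd)
    {
        if (buf_pos >= buf_len) {                  /* buffer empty: refill */
            buf_len = read(fd, buf, BUF_SIZE);     /* one device access per block */
            buf_pos = 0;
            if (buf_len <= 0)
                return -1;
        }
        return (unsigned char)buf[buf_pos++];
    }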

Similarly for writing (sketched in code after the flushing note below):

  • Place each char in a buffer.
  • Send to device only when buffer is full.

Buffered writes can cause inconsistency problems, because data sits in memory before it reaches the device:

  • May need to flush buffers periodically (the Unix sync operation traditionally runs every 30 seconds).
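
The write side can be sketched the same way in C: characters accumulate in memory and reach the device only when the buffer fills, or when it is flushed explicitly (as a periodic sync would do). The names here are illustrative.

    #include <unistd.h>

    #define BUF_SIZE 4096

    static char   wbuf[BUF_SIZE];
    static size_t wlen = 0;

    /* Push any buffered characters out to the device in one write(). */
    void buffered_flush(int fd)
    {
        if (wlen > 0) {
            write(fd, wbuf, wlen);
            wlen = 0;
        }
    }

    /* Place one character in the buffer; write out only when it is full. */
    void buffered_putc(int fd, char c)
    {
        wbuf[wlen++] = c;
        if (wlen == BUF_SIZE)
            buffered_flush(fd);
    }

Until buffered_flush runs, the device copy lags behind the in-memory copy, which is exactly the inconsistency that periodic flushing addresses.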

Buffering Techniques

  • Double Buffering
    • Read/write one buffer while the other is being filled/emptied (sketched after this list).
    • Done by graphics cards to avoid flicker.
  • Buffering may be done by:
    • Software
      • Operating system or library routines.
    • Hardware
      • Disk drive.
  • Direct Memory Access (DMA)
    • Fast devices may write directly into a memory buffer, interrupting CPU only when finished.
    • CPU might be delayed while DMA controller accesses memory (cycle stealing).
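
A sketch of double buffering in C, filling from stdin and draining to stdout purely for illustration; in a graphics card the "drain" would be the scan-out of the displayed frame, and the fill and drain would overlap in time rather than run in turn.

    #include <stdio.h>

    #define BUF_SIZE 4096

    static char buf_a[BUF_SIZE], buf_b[BUF_SIZE];

    int main(void)
    {
        char *front = buf_a;    /* buffer currently being drained */
        char *back  = buf_b;    /* buffer currently being filled */

        size_t len = fread(front, 1, BUF_SIZE, stdin);
        while (len > 0) {
            /* A real system drains and fills concurrently; shown in turn here. */
            fwrite(front, 1, len, stdout);           /* drain the front buffer */
            len = fread(back, 1, BUF_SIZE, stdin);   /* fill the back buffer */

            char *tmp = front;                       /* swap the two buffers */
            front = back;
            back  = tmp;
        }
        return 0;
    }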

Caching

There is cache memory within the CPU. More generally, caching uses a faster medium to act in place of a slower one.

Areas of main memory can be used to cache copies of items on disk (sketched after this list):

  • May also lead to inconsistency problems.
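
A minimal sketch of a main-memory block cache in C: a tiny direct-mapped table keyed by block number, where read_block_from_disk is a hypothetical stand-in for the slow device access.

    #define BLOCK_SIZE  512
    #define CACHE_SLOTS 64

    struct cache_slot {
        long block_no;              /* which disk block this slot holds */
        int  valid;                 /* does the slot hold anything yet? */
        char data[BLOCK_SIZE];
    };

    static struct cache_slot cache[CACHE_SLOTS];

    /* Hypothetical slow path: the device access we are trying to avoid. */
    extern void read_block_from_disk(long block_no, char *out);

    /* Return block `block_no`, touching the disk only on a cache miss. */
    const char *cached_read(long block_no)
    {
        struct cache_slot *slot = &cache[block_no % CACHE_SLOTS];

        if (!slot->valid || slot->block_no != block_no) {   /* miss */
            read_block_from_disk(block_no, slot->data);
            slot->block_no = block_no;
            slot->valid = 1;
        }
        return slot->data;                                  /* hit: served from memory */
    }

If the in-memory copy is modified, it must eventually be written back to disk, which is where the inconsistency problem above comes from.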

Buffering vs. Caching

Buffering is about making data transit more efficient, whereas caching is about making repeated access, or short bursts of access, more efficient.

  • Buffer: items are viewed as data in transit; access is FIFO; once an item has been read it is viewed as deleted.
  • Cache: items are viewed as copies of the original; access is random; items may be read many times.

Spooling

Some devices are non-sharable:

  • Multiple processes cannot write to such a device simultaneously, e.g. a printer.

The solution is a daemon called a spooler.

SPOOL - Simultaneous Peripheral Operations On-Line.

Method

  1. Processes send their printer output via the spooler daemon.
  2. The spooler creates a temp file for each process and writes that process's output to its file.
  3. When a process completes, the spooler adds its file to a queue for printing (de-spooling), as sketched below.
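
A rough C sketch of this flow, with made-up paths and a plain text file standing in for the print queue; a real spooler manages jobs, permissions and the queue far more carefully.

    #include <stdio.h>
    #include <stdlib.h>

    /* Steps 1-2: give each process its own temporary spool file to write to.
     * Returns an open file descriptor and fills in the chosen path. */
    int open_spool_file(char *path_out, size_t path_len)
    {
        snprintf(path_out, path_len, "/var/spool/demo/job-XXXXXX");
        return mkstemp(path_out);        /* unique temp file for this job */
    }

    /* Step 3: once the process completes, add its file to the print queue;
     * the de-spooler works through this queue and drives the printer. */
    void enqueue_for_printing(const char *spool_path)
    {
        FILE *queue = fopen("/var/spool/demo/queue", "a");
        if (queue != NULL) {
            fprintf(queue, "%s\n", spool_path);
            fclose(queue);
        }
    }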