
Computer Memory and Its Types

Memory refers to the internal storage areas in a computer system.

Just as the human brain stores our day-to-day experiences, a computer or mobile phone stores information in memory.

The term memory identifies data storage that comes in the form of silicon chips (pen drives, SD cards), and the word storage is used for memory that exists on tapes or hard disks. Moreover, the term memory is usually used as shorthand for physical memory, which refers to the actual silicon chips capable of holding data.

Some computers also use virtual memory, which expands physical memory onto a hard disk.

Every computer comes with a certain amount of physical memory, usually referred to as main memory or RAM.

You can think of main memory as an array of boxes, each of which can hold a single byte of information.

A computer that has 1 megabyte of memory can therefore hold about 1 million bytes (or characters) of information.
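As a rough illustration (a minimal C sketch with made-up values), each byte-sized box has its own address and holds one value:

/* A minimal sketch: memory viewed as an array of byte-sized "boxes",
   each with its own address. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t boxes[4] = {10, 20, 30, 40};      /* four one-byte boxes */

    for (int i = 0; i < 4; i++)
        printf("box %d at address %p holds %u\n",
               i, (void *)&boxes[i], boxes[i]);

    return 0;
}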

Memory is a major part of a computer, and it is categorised into several types.

Memory is an essential element of a computer; without it, a computer cannot perform even simple tasks. The performance of a computer depends mainly on its memory and CPU. Memory is the internal storage medium of a computer, and it is mainly categorised into two types: main (primary) memory and secondary memory.

1. Primary Memory / Volatile Memory.

2. Secondary Memory / Non Volatile Memory.

1. Primary Memory / Volatile Memory:

Primary memory is also called volatile memory because it cannot store data permanently. Primary memory selects a part of memory when the user wants to save data, but the data may not be stored permanently at that location. It is also known as RAM.

Random Access Memory (RAM):

Primary storage is referred to as random access memory (RAM) because memory locations can be selected (accessed) in any order. It supports both read and write operations. If a power failure occurs while the system is accessing memory, the data is lost permanently; this is why RAM is volatile memory. RAM is categorised into the following types.

  • DRAM
  • SRAM
  • DRDRAM

2. Secondary Memory / Non Volatile Memory:

Secondary memory is external, permanent memory held on external storage media such as floppy disks, magnetic disks, magnetic tapes, and other such devices. Secondary memory deals with the following types of components.

Read Only Memory (ROM) :

ROM is a permanent memory that offers several standard ways of saving data, but it supports only read operations. No data is lost when a power failure occurs while ROM is being used in a computer.

ROM memory has several variants, whose names are as follows.

  • PROM
  • EPROM
  • EEPROM

Memory

RAM (random-access memory): This is the main memory. When used by itself, the term RAM refers to read-and-write memory; that is, you can both write data into RAM and read data from RAM.

  1. SRAM (Static RAM): Static random access memory uses multiple transistors, typically four to six, for each memory cell, but it does not need a capacitor in each cell.
  2. DRAM (Dynamic RAM): Dynamic random access memory has memory cells each built from a transistor paired with a capacitor, and it requires constant refreshing.
  3. DRDRAM (Direct Rambus DRAM): Rambus is intended to replace dynamic random access memory (DRAM) as the main memory technology, with much faster data transfer rates to and from attached devices. Direct Rambus DRAM provides a two-byte (16-bit) bus rather than DRAM's 8-bit bus. At a RAM speed of 800 megahertz (800 million cycles per second), the peak data transfer rate is 1.6 billion bytes per second.

ROM (read-only memory): Computers almost always contain a small amount of read-only memory that holds instructions for starting up the computer. Unlike RAM, ROM cannot be written to.

  1. PROM (programmable read-only memory): A PROM is a memory chip on which you can store a program. But once the PROM has been used, you cannot wipe it clean and use it to store something else. Like ROMs, PROMs are non-volatile.
  2. EPROM (erasable programmable read-only memory): An EPROM is a special type of PROM that can be erased by exposing it to ultraviolet light.
  3. EEPROM (electrically erasable programmable read-only memory): An EEPROM (also written E²PROM) is a special type of PROM that can be erased by exposing it to an electrical charge.

This is in contrast to ROM, which permits you only to read data. Most RAM is volatile, which means that it requires a steady flow of electricity to maintain its contents. As soon as the power is turned off, whatever data was in RAM is lost.

  

Memory Types Used in Microcontrollers/Microprocessors


Hybrids

As memory technology has matured in recent years, the line between RAM and ROM has blurred. Now, several types of memory combine features of both. These devices do not belong to either group and can be collectively referred to as hybrid memory devices. Hybrid memories can be read and written as desired, like RAM, but maintain their contents without electrical power, just like ROM. Two of the hybrid devices, EEPROM and flash, are descendants of ROM devices.

These are typically used to store code.

EEPROMs are electrically erasable and programmable. Internally, they are similar to EPROMs, but the erase operation is performed electrically rather than by exposure to ultraviolet light. Any byte within an EEPROM may be erased and rewritten. Once written, the new data will remain in the device indefinitely, or at least until it is electrically erased. The primary trade-off for this improved functionality is higher cost, and write cycles are also significantly longer than writes to RAM, so you would not want to use an EEPROM for your main system memory.
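On an AVR part, for example, this byte-wise erase/rewrite behaviour can be used roughly as below (a sketch assuming avr-libc's <avr/eeprom.h>; the EEPROM address 0x10 is arbitrary):

/* Sketch assuming an AVR device and avr-libc; the EEPROM address is arbitrary. */
#include <avr/eeprom.h>
#include <stdint.h>

#define SETTING_ADDR ((uint8_t *)0x10)        /* arbitrary EEPROM location */

void save_setting(uint8_t value)
{
    eeprom_update_byte(SETTING_ADDR, value);  /* erases/rewrites just this byte; takes milliseconds */
}

uint8_t load_setting(void)
{
    return eeprom_read_byte(SETTING_ADDR);    /* contents survive power-off */
}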

Flash memory combines the best features of the memory devices described thus far. Flash memory devices are high density, low cost, nonvolatile, fast (to read, but not to write), and electrically re-programmable.

These advantages are overwhelming and, as a direct result, the use of flash memory has increased dramatically in embedded systems. From a software viewpoint, flash and EEPROM technologies are very similar. The major difference is that flash devices can only be erased one sector at a time, not byte-by-byte. Typical sector sizes are in the range 256 bytes to 16KB. Despite this disadvantage, flash is much more popular than EEPROM and is rapidly displacing many of the ROM devices as well.

The third member of the hybrid memory class is NVRAM (non-volatile RAM). Nonvolatility is also a characteristic of the ROM and hybrid memories discussed previously.

However, an NVRAM is physically very different from those devices. An NVRAM is usually just an SRAM with a battery backup. When the power is turned on, the NVRAM operates just like any other SRAM. When the power is turned off, the NVRAM draws just enough power from the battery to retain its data. NVRAM is fairly common in embedded systems.

However, it is expensive (even more expensive than SRAM, because of the battery), so its applications are typically limited to the storage of a few hundred bytes of system-critical information that cannot be stored in any better way.

Type | Volatile? | Writeable? | Erase Size | Max Erase Cycles | Cost (per Byte) | Speed
SRAM | Yes | Yes | Byte | Unlimited | Expensive | Fast
DRAM | Yes | Yes | Byte | Unlimited | Moderate | Moderate
Masked ROM | No | No | n/a | n/a | Inexpensive | Fast
PROM | No | Once, with a device programmer | n/a | n/a | Moderate | Fast
EPROM | No | Yes, with a device programmer | Entire chip | Limited (consult datasheet) | Moderate | Fast
EEPROM | No | Yes | Byte | Limited (consult datasheet) | Expensive | Fast to read, slow to erase/write
Flash | No | Yes | Sector | Limited (consult datasheet) | Moderate | Fast to read, slow to erase/write
NVRAM | No | Yes | Byte | Unlimited | Expensive (SRAM + battery) | Fast

Memory Map

When working with microcontrollers, many tasks consist of controlling the peripherals connected to the device, that is, programming the subsystems contained in the controller (which themselves communicate with the circuitry connected to the controller).

The AVR series of microcontrollers offers two different ways to perform this task:

  • There is a separate I/O address space that can be addressed with specific I/O instructions applicable to some or all of the I/O address space (in, out, sbi, etc.); this is known as I/O mapping.
  • The entire I/O address space is also made available as memory-mapped I/O, i.e. it can be accessed using all the MCU instructions that are applicable to normal data memory. The I/O register space is mapped into the data memory address space with an offset of 0x20, since the bottom of this space is reserved for direct access to the MCU registers. (Actual SRAM is available only behind the I/O register area, starting at address 0x60 or 0x100, depending on the device.)

This is why there are two sets of addresses in the AVR data sheets: the first address uses I/O mapping, and the second address, given in brackets, is the memory-mapped address.
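As a small sketch of the difference (assuming avr-gcc and an ATmega328P-style device, where PORTB sits at I/O address 0x05 and therefore at memory address 0x25), both access styles reach the same register:

/* Sketch assuming avr-gcc and an ATmega-style device. */
#include <avr/io.h>

void set_pin(void)
{
    /* Memory-mapped access: PORTB expands to *(volatile uint8_t *)(0x05 + 0x20). */
    PORTB |= (1 << PB0);

    /* I/O-mapped access: the dedicated sbi instruction takes the 0x05 I/O address. */
    __asm__ __volatile__("sbi %0, %1"
                         :
                         : "I"(_SFR_IO_ADDR(PORTB)), "I"(PB0));
}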

Registers

Registers are special storage locations with a capacity of 8 bits, and they look like this:

7 6 5 4 3 2 1 0

Note the numbering of the bits: the least significant bit is bit zero (2^0 = 1).
A register can store a number from 0 to 255 (an unsigned value, no negative values), a number from -128 to +127 (a signed value with the sign in bit 7), an ASCII-coded character (e.g. 'A'), or simply eight single bits that have nothing to do with each other (e.g. eight flags used to signal eight different yes/no decisions).
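For instance, the same 8-bit pattern can be read in all of these ways (a small C sketch):

/* A small sketch: one 8-bit value read as unsigned, signed, character, and a flag. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t reg = 0x41;                        /* one possible register content */

    printf("unsigned: %u\n", reg);             /* 65, in the range 0..255       */
    printf("signed:   %d\n", (int8_t)reg);     /* 65, in the range -128..+127   */
    printf("ASCII:    %c\n", reg);             /* 'A'                           */
    printf("bit 0:    %d\n", reg & 1);         /* a single yes/no flag          */

    return 0;
}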
The special property of registers, compared with other storage locations, is that

  • they can be used directly in assembler commands,
  • operations with their content require only a single command word,
  • they are connected directly to the central processing unit, acting as accumulators,
  • they are source and target for calculations.

There are 32 registers in an AVR, originally named r0 to r31. However, certain instructions do not work on registers r0 to r15; for example, ldi only works on registers r16 to r31.
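A small avr-gcc sketch of that restriction: the "d" register constraint limits the operand to r16-r31, which is exactly what ldi requires.

/* Sketch for avr-gcc: ldi can only target the upper registers r16-r31. */
#include <stdint.h>

uint8_t load_constant(void)
{
    uint8_t result;

    __asm__ __volatile__("ldi %0, 42" : "=d"(result));   /* "=d" allocates r16-r31 */

    return result;
}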

 

von Neumann architecture and Harvard architecture

Harvard architecture has separate data and instruction buses, allowing transfers to be performed simultaneously on both buses.

von Neumann architecture has only one bus which is used for both data transfers and instruction fetches.

Therefore, data transfers and instruction fetches must be scheduled; they cannot be performed at the same time. It is, however, possible to have two separate memory systems in a Harvard architecture.

The following are the differences between the Harvard architecture and the von Neumann architecture.

Harvard Architecture | Von Neumann Architecture
The name originates from the "Harvard Mark I", an early relay-based computer. | Named after the mathematician and early computer scientist John von Neumann.
Requires two memories, one for instructions and one for data. | Requires only one memory for both instructions and data.
The design of the Harvard architecture is complicated. | The design of the von Neumann architecture is simple.
Requires separate buses for instructions and data. | Requires only one bus for both instructions and data.
The processor can complete an instruction in one cycle. | The processor needs two clock cycles to complete an instruction.
Easier to pipeline, so high performance can be achieved. | Lower performance compared with the Harvard architecture.
Comparatively high cost. | Cheaper.

 

 

RISC and CISC Architecture

What is RISC?

A reduced instruction set computer is a computer that uses only simple instructions, each performing a low-level operation that typically completes within a single clock cycle, as the name "reduced instruction set" suggests.

Examples: Apple iPod, Nintendo DS, etc.

What is CISC?

A complex instruction set computer is a computer in which a single instruction can perform numerous low-level operations (such as a load from memory, an arithmetic operation, and a memory store) or can carry out multi-step operations or addressing modes within a single instruction, as the name "complex instruction set" suggests.

Examples: IBM 370/168, VAX 11/780, Intel 80486, etc.

Difference Between CISC and RISC

The main difference between RISC and CISC is the number of computing cycles each of their instructions takes. That difference in cycle count comes from the complexity and the goal of their instructions.

Architectural Characteristic | Complex Instruction Set Computer (CISC) | Reduced Instruction Set Computer (RISC)
Instruction size and format | Large set of instructions with variable formats (16-64 bits per instruction). | Small set of instructions with a fixed 32-bit format.
Data transfer | Memory to memory. | Register to register.
CPU control | Mostly microcoded using control memory (ROM), but modern CISC designs use hardwired control. | Mostly hardwired, without control memory.
Instruction type | Instructions are not register based. | Register-based instructions.
Memory access | More memory accesses. | Fewer memory accesses.
Clocks | Multi-clock instructions. | Single-clock instructions.
Instruction nature | Instructions are complex. | Instructions are reduced and simple.

Interrupt and Polling

What is Polling?

Polling the device usually means reading its status register every so often until the device’s status changes to indicate that it has completed the request.

Polling means you won’t know when the data is ready, but you can get it when you are ready. You’ll have to tell your program how to wait for the data.

Example :

The code below is an example of transmitting data using polling.

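A minimal version of such a transmit routine might look like this (a sketch assuming an ATmega8/16/32-style USART, where the status register is UCSRA and the data register is UDR; the names differ on newer parts):

/* Sketch assuming an ATmega8/16/32-style USART (UCSRA, UDRE, UDR). */
#include <avr/io.h>
#include <stdint.h>

void usart_transmit(uint8_t data)
{
    while (!(UCSRA & (1 << UDRE)))   /* poll until the transmit buffer is empty */
        ;
    UDR = data;                      /* load the new data to be transmitted     */
}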

In the above code, the line while (!(UCSRA & (1 << UDRE))) simply waits for the transmit buffer to be empty by checking the UDRE flag, before loading it with new data to be transmitted.

The code below is a similar example of receiving data using polling.

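A matching receive routine, under the same register-name assumptions:

/* Sketch assuming the same ATmega-style USART register names (UCSRA, RXC, UDR). */
#include <avr/io.h>
#include <stdint.h>

uint8_t usart_receive(void)
{
    while (!(UCSRA & (1 << RXC)))    /* poll until a received byte is available */
        ;
    return UDR;                      /* read the buffer and return the value    */
}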

In the above code, the line while (!(UCSRA & (1 << RXC))) simply waits for data to be present in the receive buffer by checking the RXC flag, before reading the buffer and returning the value.

What is an Interrupt?

In systems programming, an interrupt is a signal to the processor, emitted either by hardware or software, indicating an event that needs immediate attention.

Interrupt triggers instantly when the data is ready, and the interrupt handler can fetch the data before the next instruction completes.

Example :

The code below is a similar example of receiving data using an interrupt.

ISR(USART_RXC_vect){
   value = UDR;             //read UART register into value
}

In the above code, ISR(USART_RXC_vect) defines the interrupt service routine that executes when the USART receive-complete interrupt occurs; it reads the received byte into value.
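For that ISR to fire, the receive-complete interrupt has to be enabled first; a sketch of the setup (same ATmega-style register names assumed, with value declared volatile because it is shared with the ISR):

/* Sketch of the setup needed by the receive ISR above (ATmega-style names). */
#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

volatile uint8_t value;                       /* shared with the ISR            */

void usart_rx_interrupt_init(void)
{
    UCSRB = (1 << RXEN) | (1 << RXCIE);       /* enable receiver + RX interrupt */
    sei();                                    /* enable global interrupts       */
}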

Below is an example of ADC data conversion using an interrupt:
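A sketch of what such an interrupt-driven conversion can look like (assuming an ATmega-style ADC; register and bit names vary by device):

/* Sketch assuming an ATmega-style ADC (ADMUX, ADCSRA, ADC_vect). */
#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

volatile uint16_t adc_result;

ISR(ADC_vect)
{
    adc_result = ADC;                          /* read the 10-bit result     */
    ADCSRA |= (1 << ADSC);                     /* start the next conversion  */
}

void adc_init(void)
{
    ADMUX  = (1 << REFS0);                     /* AVcc reference, channel 0  */
    ADCSRA = (1 << ADEN) | (1 << ADIE)         /* enable ADC + its interrupt */
           | (1 << ADPS2) | (1 << ADPS1);      /* ADC clock prescaler        */
    sei();                                     /* global interrupt enable    */
    ADCSRA |= (1 << ADSC);                     /* start the first conversion */
}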

 

What is an ISR?

"Interrupt Service Routine." An ISR (also called an interrupt handler) is a software routine invoked in response to an interrupt request from a hardware device. The request interrupts the active process, the ISR handles it, and when the ISR is complete, the interrupted process is resumed.

Basis for Comparison | Interrupt | Polling
Basic | The device notifies the CPU that it needs attention. | The CPU constantly checks the device's status to see whether it needs attention.
Mechanism | An interrupt is a hardware mechanism. | Polling is a protocol.
Servicing | The interrupt handler services the device. | The CPU services the device.
Indication | The interrupt-request line indicates that the device needs servicing. | The command-ready bit indicates that the device needs servicing.
CPU | The CPU is disturbed only when a device needs servicing, which saves CPU cycles. | The CPU has to keep waiting and checking whether a device needs servicing, which wastes many CPU cycles.
Occurrence | An interrupt can occur at any time. | The CPU polls the devices at regular intervals.
Efficiency | Interrupts become inefficient when devices keep interrupting the CPU repeatedly. | Polling becomes inefficient when the CPU rarely finds a device ready for service.
Example | Let the bell ring, then open the door to see who has come. | Keep opening the door again and again to check whether anybody has come.

Types of Interrupt :

Interrupts are a commonly used technique in real-time computing and such a system is said to be interrupt-driven.

Hardware | Software (Exception/Trap) | Software (Instruction set)
An interrupt request (IRQ) sent from a device to the processor. | An exception or trap sent from the processor to itself, caused by an exceptional condition in the processor. | A special instruction in the instruction set that causes an interrupt when executed.

Hardware interrupt
A hardware interrupt is a signal that tells the CPU that something has happened in a hardware device and needs an immediate response. Hardware interrupts are triggered by peripheral devices outside the microcontroller. An interrupt causes the processor to save its state of execution and begin executing an interrupt service routine.
Unlike the software interrupts, hardware interrupts are asynchronous and can occur in the middle of instruction execution, requiring additional care in programming. The act of initiating a hardware interrupt is referred to as an interrupt request (IRQ).

Software interrupt
A software interrupt is an instruction that causes a context switch to an interrupt handler, similar to a hardware interrupt. It is usually generated within the processor by executing a special instruction in the instruction set that causes an interrupt when it is executed.
Another type of software interrupt is triggered by an exceptional condition in the processor itself. This type of interrupt is often called a trap or exception.
Unlike hardware interrupts, where the number of interrupts is limited by the number of interrupt request (IRQ) lines to the processor, there can be hundreds of different software interrupts.

What is interrupt latency?

Interrupt latency refers primarily to the software interrupt handling latencies.

In other words, the amount of time that elapses from the time that an external interrupt arrives at the processor until the time that the interrupt processing begins.

One of the most important aspects of kernel real-time performance is the ability to service an interrupt request (IRQ) within a specified amount of time.

IRQ vs FIQ
  1. IRQ (Interrupt Request)
    An interrupt request (IRQ) is a hardware signal sent to the processor that temporarily stops a running program and allows a special program, an interrupt handler, to run instead. Interrupts are used to handle events such as data arriving from a modem or network, or a key press or mouse movement.
  2. FIQ (Fast Interrupt Request)
    An FIQ is simply a higher-priority interrupt request; it is prioritised by disabling IRQ and other FIQ handlers while the request is being serviced. Therefore, no other interrupts can occur during the processing of the active FIQ interrupt.

Little Endian vs Big Endian

Endianness refers to the sequential order in which bytes are arranged into larger numerical values when stored in memory.

A simple way to remember it: in little endian, the least significant byte goes to the lowest address, and vice versa.

Little endian and big endian are two ways of storing multi-byte data types (e.g. int, float, etc.).


In big-endian machines, the first byte of the binary representation of a multi-byte data type is stored first. In little-endian machines, on the other hand, the last byte of the binary representation is stored first.

Current architectures

The Intel x86 and also AMD64 / x86-64 series of processors use the little-endian format, and for this reason, it is also known in the industry as the “Intel convention“.

Some current big-endian architectures include the IBM z/architecture, Freescale ColdFire (which is Motorola 68000 series-based), Xilinx MicroBlaze, Atmel AVR-32.

As a consequence of its original implementation on the Intel 8080 platform, the operating system-independent FAT file system is defined to use little-endian byte ordering, even on platforms using other endiannesses natively.

 

Big-endian is the most common format in data networking; fields in the protocols of the Internet protocol suite, such as IPv4, IPv6, TCP, and UDP, are transmitted in big-endian order. For this reason, big-endian byte order is also referred to as network byte order.

Little-endian storage is popular for microprocessors, in part due to significant influence on microprocessor designs by Intel Corporation.

Example :-

Suppose an integer is stored in 4 bytes; then a variable var with the value 0x01234567 will be stored as follows: on a big-endian machine the bytes at increasing addresses are 01 23 45 67, while on a little-endian machine they are 67 45 23 01.

 

C code to see your system's memory representation as well as its endianness:

Source Code :-

  • Checking the memory representation:

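One way is to walk through the value byte by byte (a sketch):

/* Sketch: print the byte-by-byte memory representation of a variable. */
#include <stdio.h>
#include <stddef.h>

static void show_bytes(const void *start, size_t n)
{
    const unsigned char *p = start;

    for (size_t i = 0; i < n; i++)
        printf(" %.2x", p[i]);           /* byte stored at the i-th lowest address */
    printf("\n");
}

int main(void)
{
    int var = 0x01234567;

    show_bytes(&var, sizeof var);        /* 67 45 23 01 on little endian, 01 23 45 67 on big endian */
    return 0;
}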

  • Now, checking the endianness of the system:

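A common check looks at which byte of an integer ends up at the lowest address (a sketch):

/* Sketch: detect endianness from the first byte of an integer equal to 1. */
#include <stdio.h>

int main(void)
{
    unsigned int x = 1;
    unsigned char *c = (unsigned char *)&x;

    if (*c)                              /* least significant byte at the lowest address */
        printf("Little endian\n");
    else
        printf("Big endian\n");

    return 0;
}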


Simple C code to check the endianness of a given system:

Example :-

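One simple way, written with a union (a sketch):

/* Sketch: the same endianness test expressed with a union. */
#include <stdio.h>

int main(void)
{
    union {
        unsigned int value;
        unsigned char bytes[sizeof(unsigned int)];
    } probe = { .value = 1 };

    printf("%s endian\n", probe.bytes[0] ? "Little" : "Big");
    return 0;
}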

To simplify the above code:

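The same test collapsed into a single expression (a sketch):

/* Sketch: the test reduced to one expression. */
#include <stdio.h>

int main(void)
{
    unsigned int x = 1;

    printf("%s endian\n", *(unsigned char *)&x ? "Little" : "Big");
    return 0;
}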

Some other ways to find out the endianness:

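Another possibility is to compare a value with its network (big-endian) byte order; this variant assumes a POSIX system that provides htonl():

/* Sketch: compare host byte order with network (big-endian) byte order.
   Assumes a POSIX system for <arpa/inet.h>. */
#include <stdio.h>
#include <arpa/inet.h>

int main(void)
{
    unsigned int x = 0x01234567u;

    if (htonl(x) == x)
        printf("Big endian\n");          /* host order already equals network order */
    else
        printf("Little endian\n");

    return 0;
}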

MEMORY-MAPPED I/O VS I/O-MAPPED I/O

Normally, microcontrollers and microprocessors use one of two methods to connect external devices:

  1. Memory Mapped I/O.
  2. I/O Mapped I/O (also known as Port Mapped I/O).

However, as far as the peripheral is concerned, both methods are really identical.

Memory-mapped I/O is mapped into the same address space as program memory and/or user memory, and is accessed in the same way.

I/O-mapped I/O uses a separate, dedicated address space and is accessed via a dedicated set of microprocessor instructions.

The difference between the two schemes occurs within the microprocessor/microcontroller. Intel has, for the most part, used the I/O-mapped scheme for its microprocessors, and Motorola has used the memory-mapped scheme.

As 16-bit processors have become obsolete and been replaced by 32-bit and 64-bit processors in general use, reserving ranges of memory address space for I/O is less of a problem, as the memory address space of the processor is usually much larger than the required space for all memory and I/O devices in a system.

Therefore, it has become more frequently practical to take advantage of the benefits of memory-mapped I/O. However, even with address space no longer being a major concern, neither I/O mapping method is universally superior to the other, and there will be cases where using port-mapped I/O is still preferable.

Memory-mapped IO (MMIO)


I/O devices are mapped into the system memory map along with RAM and ROM. To access a hardware device, simply read or write to those ‘special’ addresses using the normal memory access instructions.

The advantage to this method is that every instruction which can access memory can be used to manipulate an I/O device.

The disadvantage to this method is that the entire address bus must be fully decoded for every device.

For example, a machine with a 32-bit address bus would require logic gates to resolve the state of all 32 address lines to properly decode the specific address of any device. This increases the cost of adding hardware to the machine.

I/O-mapped IO (PORT Mapped IO)


I/O devices are mapped into a separate address space. This is usually accomplished by having a different set of signal lines to indicate a memory access versus a port access.

The address lines are usually shared between the two address spaces, but fewer of them are used for accessing ports. An example of this is the standard PC, which uses 16 bits of port address space but 32 bits of memory address space.

The advantage of this system is that less logic is needed to decode a discrete address, and therefore it costs less to add hardware devices to a machine. On older PC-compatible machines, only 10 bits of address space were decoded for I/O ports, so there were only 1024 unique port locations; modern PCs decode all 16 address lines. To read from or write to a hardware device, special port I/O instructions are used.

From a software perspective, this is a slight disadvantage because more instructions are required to accomplish the same task.

For instance, if we wanted to test one bit on a memory mapped port, there is a single instruction to test a bit in memory, but for ports we must read the data into a register, then test the bit.
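As a rough C-level illustration (the addresses below are made up, and the port-I/O half assumes x86 Linux's <sys/io.h>, which also needs ioperm()/iopl() privileges at run time), the memory-mapped register is tested like any memory location, while the port has to be read into a variable first:

/* Sketch only: hypothetical addresses; port I/O assumes x86 Linux <sys/io.h>. */
#include <stdint.h>
#include <sys/io.h>

#define MMIO_STATUS ((volatile uint8_t *)0xFEC00000u)  /* hypothetical MMIO register */
#define PORT_STATUS 0x64u                              /* hypothetical I/O port      */

int mmio_bit_set(void)
{
    return (*MMIO_STATUS & 0x01) != 0;   /* tested like ordinary memory          */
}

int port_bit_set(void)
{
    uint8_t v = inb(PORT_STATUS);        /* must be read with a port instruction */
    return (v & 0x01) != 0;              /* then the bit is tested in a register */
}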