Blog

Introduction to RTOS Part-1

What is an RTOS?

An Operating System (OS) is in charge of managing the resources of your computer. It will put tasks on the CPU and take tasks off the CPU, giving each task the illusion that it owns the entire processor. It will implement some kind of scheduling policy, which dictates when a task should be given time to run on the CPU and for how long. It will also manage the memory in the system and the various bits of hardware attached.

A general OS, such as Windows or Linux, is normally designed for throughput and fairness, with a reasonably fast response on average. The OS wants to get as much work done as possible during any period of time. It is designed to be fair, so that most tasks will get at least some time on the CPU, which means that even if things run slowly, you can still use your text editor and listen to your favourite music at the same time. The time taken to respond to events, such as your sound card requesting more audio data, will on average be quick. If the system is busy, however, it may or may not handle the event in a timely manner and the worst case here is that you get a brief “glitch” in your music.

But what if that “glitch” in your music was actually something more serious? What if the event was “a child has stepped out in front of a driverless car”? In this case we want the OS to service this event as soon as possible, 100% reliably. We’d certainly be happy for our music to be interrupted to service this event! This use case requires an RTOS, which will guarantee to service the event within a defined, minimal time.

These are two extremes and there are clearly cases in the middle. Perhaps the event is a machine exceeding a safety limit and the result is just a broken machine. Either way, in all of these scenarios, if the event is not serviced within its deadline, something “bad” happens. In RTOS parlance we call the ability to execute the right thing at the right time “correctness”.

An RTOS guarantees something that a more general OS won’t: it is deterministic and therefore predictable, which means that with a correctly designed system it will reliably meet its deadlines. It favours responsiveness over throughput and correctness over fairness.

This becomes especially important in busy systems, where no matter how busy it gets, the OS has to guarantee that it will respond to critical events.

“Real-time is not about fast execution or performance … It’s basically about determinism and timing guarantees. Real-time gives you a guarantee that something will execute within a given time frame. You don’t want to be as fast as possible, but as fast as specified.”

What is “soft” and “hard” real-time?

Whether real-time is “soft” or “hard”, however, depends on the scenario in which the RTOS is being used. That is, it is a property of the use case, not of the RTOS itself. If failure to meet a deadline would lead to someone being run over, this could not be tolerated under any circumstances. The system must hit 100% of the deadlines 100% of the time. This is a hard real-time use case.

If the use case requirement was that the RTOS must hit 100% of its deadlines 95% of the time, we would call this a “soft” real-time constraint. For example, if the only consequence of missing the deadline is a broken machine, and no broken people, we might be willing to tolerate some margin for error. But then again, we might not. So what is “hard” or “soft” real-time can be somewhat subjective.

The figure below shows another way of describing a hard real-time system.

[Figure: hard real-time performance as an all-or-nothing step function of completion time]

The system has a step function in its performance: it is all-or-nothing, which represents our requirement that the system hit 100% of its deadlines 100% of the time. If it can perform a job, be it a periodically scheduled job or a response to an event, all is fine, but if it cannot, then the result is complete performance failure. For periodic tasks an early job output might be as unacceptable as a late output, and this is what the solid line represents. The dotted line represents a system that can accept an early output. When considering response to asynchronous events, we normally consider a step rather than a notch (use the dotted line).

In contrast, the performance of a soft real-time system can be represented using the figure below.

[Figure: soft real-time performance degrading gradually past the deadline]

The graph represents the concept that performance can degrade whilst still providing an acceptable output.

So, these are the basics of what a real-time OS is and why you may need one. The next article in this series will discuss the methods an RTOS uses to provide determinism, predictability and timing guarantees.

Linux vim Editor Commands

About vim:

vim, which stands for “Vi IMproved”, is a text editor. It can be used for editing any kind of text and is especially suited for editing computer programs.

Description

vim is a text editor that is upwards compatible with Vi. There are a lot of enhancements above Vi: multi-level undo, multiple windows and buffers, syntax highlighting, command line editing, filename completion, a complete help system, visual selection, and others.


Vim has a particular working method: there are two main kinds of mode, the command mode and the editing modes.

The command mode lets you select the working mode that you want to enter.

Commands are available for saving, quitting, copying, pasting and that kind of thing, but you can’t edit the file directly while in command mode.

This is what puzzles many users who are new to vim, and it is something one has to get used to first.

Vim modes

There are several other modes; I’ll cover only the most widely used ones here.

Insert Mode

The Insert mode lets you insert text in a document. The shortcut is “i” (insert text at the cursor position) or “o” (open a new line below the cursor and insert text there).

Visual Mode

The visual mode permits the user to select text like you would with a mouse, but using the keyboard instead. It is useful, for example, for copying several lines of text. The shortcut is “V” (which selects whole lines).

Command Mode

Let’s now speak about the command mode: a command begins with the symbol “:”.

When you are in another mode, you can press the escape key (sometimes you’ll need to hit it twice) to come back to command mode at any time.

Vim usage example

To start using vim, just run the “vim” command on the Linux shell followed by the path of the file that you want to edit.

Open a terminal (on most Linux desktops the shortcut is Ctrl+Alt+T),

then follow this example of writing C code using the vim editor:

vim c_code_with_vim_editor.c


and then hit enter.

The editor starts in command mode. To switch to insert mode and start editing the file content, press:

i

(In the commands below, [enter] means pressing the return or enter key on your keyboard.)

The word -- INSERT -- will appear at the bottom of the editor window to show that you are in insert mode now.


Now you can edit the file by navigating to the line that you want to change with the cursor keys and then start typing the text.
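If you would like something concrete to type, the file might contain a minimal C program along these lines (the actual contents of c_code_with_vim_editor.c are not shown in the text, so this is just an illustrative stand-in):

#include <stdio.h>

int main(void)
{
    /* a minimal program to practice editing, compiling and running */
    printf("Hello from a file written in vim!\n");
    return 0;
}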


When you are finished with editing, press the [esc] key to go back to the command mode.


To save the file and exit the editor, enter:

:x[enter]


Equivalently, to save the file and quit vim, you can also enter:

:wq[enter]


You can see the file (c_code_with_vim_editor.c) you have just created by typing the following command:

cat c_code_with_vim_editor.c [enter]


To compile and run it, you may use the following commands:

Compiling: gcc c_code_with_vim_editor.c [enter]

Executing: ./a.out


 

Vim Command Reference

save: :w
save and exit: :wq (or :x)
exit: :q
force: ! (for example :w! or :q!)
split the screen: open a document and then type :split /path-to-document/document; this opens the specified document in a horizontal split so you can see both documents (:vsplit opens a vertical split instead).
copy: y
copy a line: yy
paste: p
cut: d
cut a line: dd

These are the very basic vim commands, but they are useful because vim or vi comes preinstalled on most Linux systems. I hope this helps you get started with vim on Linux.

All About Threads

What is a Thread?

A thread is a path of execution within a process. Also, a process can contain multiple threads.

Why Multithreading?

A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a process into multiple threads. For example, in a browser, multiple tabs can be different threads. MS Word uses multiple threads: one thread to format the text, another thread to process inputs, and so on.

Process vs Thread?
The typical difference is that threads within the same process run in a shared memory space, while processes run in separate memory spaces.

Threads are not independent of one another the way processes are; as a result, threads share their code section, data section and OS resources (such as open files and signals) with the other threads in the process. But, like a process, a thread has its own program counter (PC), register set, and stack space.

  • Each thread belongs to exactly one process and no thread can exist outside a process. Each thread represents a separate flow of control.
  • Processes are typically independent, while threads exist as subsets of a process.
  • Processes carry considerably more state information than threads, whereas multiple threads within a process share process state as well as memory and other resources.
  • Processes have separate address spaces, whereas threads share their address space.
  • Processes interact only through system-provided inter-process communication mechanisms; context switching between threads in the same process is typically faster than context switching between processes.

Advantages of Thread over Process
1. Responsiveness: If a process is divided into multiple threads and one thread completes its execution, its output can be returned immediately.

2. Faster context switch: Context switch time between threads is lower than between processes; a process context switch costs the CPU more overhead.

3. Effective utilization of multiprocessor systems: If we have multiple threads in a single process, we can schedule them on multiple processors. This makes process execution faster.

4. Resource sharing: Resources like code, data and files can be shared among all threads within a process.
Note: stacks and registers can’t be shared among the threads; each thread has its own stack and registers.

5. Communication: Communication between multiple threads is easier, as threads share a common address space, while two processes must use a specific inter-process communication technique.

6. Enhanced throughput of the system: If a process is divided into multiple threads and each thread’s function is considered one job, then the number of jobs completed per unit time increases, raising the throughput of the system.

Process vs. thread comparison:

1. Process: heavyweight and resource-intensive.
   Thread: lightweight, taking fewer resources than a process.
2. Process: switching needs interaction with the operating system.
   Thread: switching does not need to interact with the operating system.
3. Process: in multiprocessing environments, each process executes the same code but has its own memory and file resources.
   Thread: all threads can share the same set of open files and child processes.
4. Process: if a process blocks, all work within that process stops until it is unblocked.
   Thread: while one thread is blocked and waiting, a second thread in the same task can run.
5. Process: multiple processes without threads use more resources.
   Thread: multiple threaded processes use fewer resources.
6. Process: each process operates independently of the others.
   Thread: one thread can read, write or change another thread’s data.

Advantages of Thread

  • Threads minimize the context switching time.
  • Use of threads provides concurrency within a process.
  • Efficient communication.
  • It is more economical to create and context switch threads.
  • Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.

Thread Basics:

  • Thread operations include thread creation, termination, synchronization (joins, blocking), scheduling, data management and process interaction.
  • A thread does not maintain a list of created threads, nor does it know the thread that created it.
  • All threads within a process share the same address space.
  • Threads in the same process share:
    • Process instructions
    • Most data
    • open files (descriptors)
    • signals and signal handlers
    • current working directory
    • User and group id
  • Each thread has a unique:
    • Thread ID
    • set of registers, stack pointer
    • stack for local variables, return addresses
    • signal mask
    • priority
    • Return value: errno
  • pthread functions return “0” if OK.
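Because every pthread function returns 0 on success and an error code on failure, a common idiom is to wrap each call in a small checker. A minimal sketch of that idiom (the check() helper and task() function are my own illustration, not part of the pthread API):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>

static void *task(void *arg)
{
     (void)arg;   /* unused */
     return NULL;
}

/* Print a message and exit if a pthread call fails. pthread functions
   return 0 on success and an errno-style error code on failure. */
static void check(int rc, const char *what)
{
     if (rc != 0) {
          fprintf(stderr, "%s failed: %s\n", what, strerror(rc));
          exit(EXIT_FAILURE);
     }
}

int main(void)
{
     pthread_t t;
     check(pthread_create(&t, NULL, task, NULL), "pthread_create");
     check(pthread_join(t, NULL), "pthread_join");
     return 0;
}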

Thread Creation and Termination:

Example code: thread.c

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

void *print_message_function( void *ptr );

int main(void)
{
     pthread_t thread1, thread2;
     char *message1 = "Thread 1";
     char *message2 = "Thread 2";
     int  iret1, iret2;

    /* Create independent threads each of which will execute function */

     iret1 = pthread_create( &thread1, NULL, print_message_function, (void*) message1);
     iret2 = pthread_create( &thread2, NULL, print_message_function, (void*) message2);

     /* Wait till threads are complete before main continues. Unless we  */
     /* wait we run the risk of executing an exit which will terminate   */
     /* the process and all threads before the threads have completed.   */

     pthread_join( thread1, NULL);
     pthread_join( thread2, NULL); 

     printf("Thread 1 returns: %d\n",iret1);
     printf("Thread 2 returns: %d\n",iret2);
     exit(0);
}

void *print_message_function( void *ptr )
{
     char *message;
     message = (char *) ptr;
     printf("%s \n", message);
     return NULL;   /* the thread's return value is unused here */
}

Compile:

  • C compiler: cc thread.c -lpthread
    or
  • C++ compiler: g++ thread.c -lpthread

Run: ./a.out
Results:

Thread 1
Thread 2
Thread 1 returns: 0
Thread 2 returns: 0

Details:

  • In this example, the same function is used in each thread. The arguments are different. The functions need not be the same.
  • Threads terminate by explicitly calling pthread_exit, by letting the function return, or by a call to the function exit which will terminate the process including any threads.
  • Function call: pthread_create
        int pthread_create(pthread_t * thread, 
                           const pthread_attr_t * attr,
                           void * (*start_routine)(void *), 
                           void *arg);
    

    Arguments:

    • thread – returns the thread id. (unsigned long int defined in bits/pthreadtypes.h)
    • attr – Set to NULL if default thread attributes are used. (Otherwise, define members of the struct pthread_attr_t defined in bits/pthreadtypes.h.) Attributes include:
        • detached state (joinable? Default: PTHREAD_CREATE_JOINABLE. Other option: PTHREAD_CREATE_DETACHED)

        • scheduling policy (real-time? SCHED_FIFO, SCHED_RR or SCHED_OTHER) and the associated scheduling parameter

        • inherit-sched attribute (Default: PTHREAD_EXPLICIT_SCHED. Inherit from parent thread: PTHREAD_INHERIT_SCHED)

        • scope (kernel threads: PTHREAD_SCOPE_SYSTEM. User threads: PTHREAD_SCOPE_PROCESS. Pick one or the other, not both.)

        • guard size

        • stack address (see unistd.h and bits/posix_opt.h, _POSIX_THREAD_ATTR_STACKADDR)

        • stack size (the default minimum PTHREAD_STACK_MIN is set in limits.h)

    • void * (*start_routine) – pointer to the function to be threaded. Function has a single argument: pointer to void.
    • *arg – pointer to argument of function. To pass multiple arguments, send a pointer to a structure.
  • Function call: pthread_exit
        void pthread_exit(void *retval);
    

    Arguments:

    • retval – Return value of thread.

    This routine kills the thread. The pthread_exit function never returns. If the thread is not detached, the thread id and return value may be examined from another thread by using pthread_join.

    Note: the return pointer *retval, must not be of local scope otherwise it would cease to exist once the thread terminates.
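Putting the last two points together: the single void * argument can carry several values by pointing at a structure, and pthread_join can collect a heap-allocated return value. A minimal sketch (the struct job_args and its fields are invented for illustration; only pthread_create and pthread_join are real API calls):

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

/* Hypothetical argument bundle: several values behind one void *. */
struct job_args {
     int id;
     const char *label;
};

static void *job(void *ptr)
{
     struct job_args *args = ptr;
     printf("job %d (%s) running\n", args->id, args->label);

     /* The return value must outlive the thread, so allocate it
        on the heap rather than on the thread's stack. */
     int *result = malloc(sizeof *result);
     *result = args->id * 2;
     return result;
}

int main(void)
{
     struct job_args args = { 7, "demo" };
     pthread_t tid;
     void *ret;

     pthread_create(&tid, NULL, job, &args);
     pthread_join(tid, &ret);   /* second argument receives the thread's return value */
     printf("job returned %d\n", *(int *)ret);
     free(ret);
     return 0;
}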

Thread Synchronization:

The threads library provides three synchronization mechanisms:

  1. Mutexes – mutual exclusion locks: block access to variables by other threads. This enforces exclusive access by a thread to a variable or set of variables.
  2. Joins – make a thread wait until others are complete (terminated).
  3. Condition variables – data type pthread_cond_t.

 

Mutexes:

Mutexes are used to prevent data inconsistencies due to race conditions.

A race condition often occurs when two or more threads need to perform operations on the same memory area, but the results of computations depend on the order in which these operations are performed.

  • Mutexes are used for serializing shared resources. Anytime a global resource is accessed by more than one thread the resource should have a Mutex associated with it.
  • One can apply a mutex to protect a segment of memory (“critical region”) from other threads.
  • Mutexes can be applied only to threads in a single process and do not work between processes as do semaphores.

Example threaded function, without and with a mutex:

Without mutex:

int counter=0;

/* Function C */
void functionC()
{
   counter++;
}

With mutex:

/* Note scope of variable and mutex are the same */
pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
int counter=0;

/* Function C */
void functionC()
{
   pthread_mutex_lock( &mutex1 );
   counter++;
   pthread_mutex_unlock( &mutex1 );
}
Possible execution sequences:

Without the mutex:
Thread 1: counter = 0, then counter = 1
Thread 2: counter = 0, then counter = 1
(Both threads read 0 and write back 1, so one increment is lost.)

With the mutex:
Thread 1: counter = 0, then counter = 1 (Thread 2 is locked out; Thread 1 has exclusive use of the variable counter)
Thread 2: counter = 2

If register load and store operations for the incrementing of variable counter occur with unfortunate timing, it is theoretically possible to have each thread increment and overwrite the same variable with the same value.

Another possibility is that thread two would first increment counter, locking out thread one until complete, and then thread one would increment it to 2:

Sequence 1: Thread 1: counter = 0. Thread 2: counter = 0.
Sequence 2: Thread 1 is locked out; Thread 2 has exclusive use of the variable counter and sets counter = 1.
Sequence 3: Thread 1: counter = 2.

Example code: mutex.c

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

void *functionC(void *arg);
pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
int  counter = 0;

int main(void)
{
   int rc1, rc2;
   pthread_t thread1, thread2;

   /* Create independent threads each of which will execute functionC */

   if( (rc1=pthread_create( &thread1, NULL, &functionC, NULL)) )
   {
      printf("Thread creation failed: %d\n", rc1);
   }

   if( (rc2=pthread_create( &thread2, NULL, &functionC, NULL)) )
   {
      printf("Thread creation failed: %d\n", rc2);
   }

   /* Wait till threads are complete before main continues. Unless we  */
   /* wait we run the risk of executing an exit which will terminate   */
   /* the process and all threads before the threads have completed.   */

   pthread_join( thread1, NULL);
   pthread_join( thread2, NULL); 

   exit(0);
}

void *functionC(void *arg)
{
   (void)arg;   /* thread argument is unused */
   pthread_mutex_lock( &mutex1 );
   counter++;
   printf("Counter value: %d\n",counter);
   pthread_mutex_unlock( &mutex1 );
   return NULL;
}

Compile: cc mutex.c -lpthread
Run: ./a.out
Results:

Counter value: 1
Counter value: 2

When a mutex lock is attempted against a mutex which is held by another thread, the thread is blocked until the mutex is unlocked. When a thread terminates, the mutex is not released unless it is explicitly unlocked; nothing happens by default.

Joins:

A join is performed when one wants to wait for a thread to finish.

A calling routine may launch multiple threads and then wait for them to finish to collect the results. One waits for the completion of the threads with a join.

Example code: join_thread.c

#include <stdio.h>
#include <pthread.h>

#define NTHREADS 10
void *thread_function(void *);
pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
int  counter = 0;

int main(void)
{
   pthread_t thread_id[NTHREADS];
   int i, j;

   for(i=0; i < NTHREADS; i++)
   {
      pthread_create( &thread_id[i], NULL, thread_function, NULL );
   }

   for(j=0; j < NTHREADS; j++)
   {
      pthread_join( thread_id[j], NULL); 
   }
  
   /* Now that all threads are complete I can print the final result.     */
   /* Without the join I could be printing a value before all the threads */
   /* have been completed.                                                */

   printf("Final counter value: %d\n", counter);
}

void *thread_function(void *dummyPtr)
{
   (void)dummyPtr;   /* unused */
   printf("Thread number %ld\n", (long)pthread_self());
   pthread_mutex_lock( &mutex1 );
   counter++;
   pthread_mutex_unlock( &mutex1 );
   return NULL;
}

Compile: cc join_thread.c -lpthread
Run: ./a.out
Results:

Thread number 1026
Thread number 2051
Thread number 3076
Thread number 4101
Thread number 5126
Thread number 6151
Thread number 7176
Thread number 8201
Thread number 9226
Thread number 10251
Final counter value: 10

 

Condition Variables:
A condition variable is a variable of type pthread_cond_t and is used with the appropriate functions for waiting and, later, for resuming execution.

  • The condition variable mechanism allows threads to suspend execution and relinquish the processor until some condition is true.
  • A condition variable must always be associated with a mutex, to avoid the race condition in which one thread prepares to wait while another thread signals the condition just before the first thread actually waits on it. That would result in deadlock: the thread would be perpetually waiting for a signal that is never re-sent.
  • Any mutex can be used; there is no explicit link between the mutex and the condition variable.

Functions used in conjunction with the condition variable:

Creating/Destroying:

pthread_cond_init
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
pthread_cond_destroy

Waiting on condition:

pthread_cond_wait
pthread_cond_timedwait – places a limit on how long it will block.

Waking thread based on condition:

pthread_cond_signal
pthread_cond_broadcast – wake up all threads blocked by the specified condition variable.

Example code: condition.c

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

pthread_mutex_t count_mutex     = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t condition_mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  condition_cond  = PTHREAD_COND_INITIALIZER;

void *functionCount1(void *arg);
void *functionCount2(void *arg);
int  count = 0;
#define COUNT_DONE  10
#define COUNT_HALT1  3
#define COUNT_HALT2  6

int main(void)
{
   pthread_t thread1, thread2;

   pthread_create( &thread1, NULL, &functionCount1, NULL);
   pthread_create( &thread2, NULL, &functionCount2, NULL);
   pthread_join( thread1, NULL);
   pthread_join( thread2, NULL);

   exit(0);
}

void *functionCount1(void *arg)
{
   (void)arg;   /* unused */
   for(;;)
   {
      pthread_mutex_lock( &condition_mutex );
      while( count >= COUNT_HALT1 && count <= COUNT_HALT2 )
      {
         pthread_cond_wait( &condition_cond, &condition_mutex );
      }
      pthread_mutex_unlock( &condition_mutex );

      pthread_mutex_lock( &count_mutex );
      count++;
      printf("Counter value functionCount1: %d\n",count);
      pthread_mutex_unlock( &count_mutex );

      if(count >= COUNT_DONE) return(NULL);
    }
}

void *functionCount2(void *arg)
{
    (void)arg;   /* unused */
    for(;;)
    {
       pthread_mutex_lock( &condition_mutex );
       if( count < COUNT_HALT1 || count > COUNT_HALT2 )
       {
          pthread_cond_signal( &condition_cond );
       }
       pthread_mutex_unlock( &condition_mutex );

       pthread_mutex_lock( &count_mutex );
       count++;
       printf("Counter value functionCount2: %d\n",count);
       pthread_mutex_unlock( &count_mutex );

       if(count >= COUNT_DONE) return(NULL);
    }

}

Compile: cc condition.c -lpthread
Run: ./a.out
Results:

Counter value functionCount1: 1
Counter value functionCount1: 2
Counter value functionCount1: 3
Counter value functionCount2: 4
Counter value functionCount2: 5
Counter value functionCount2: 6
Counter value functionCount2: 7
Counter value functionCount1: 8
Counter value functionCount1: 9
Counter value functionCount1: 10
Counter value functionCount2: 11

Note that functionCount1() was halted while count was between the values COUNT_HALT1 and COUNT_HALT2. The only thing that has been ensured is that functionCount2 will increment the count between the values COUNT_HALT1 and COUNT_HALT2. Everything else is random.

The logic conditions (the “if” and “while” statements) must be chosen to ensure that the “signal” is executed if the “wait” is ever processed. Poor software logic can also lead to a deadlock condition.

Note: Race conditions abound with this example because count is used as the condition and can’t be locked in the while statement without causing deadlock.
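The usual way to remove these races is to guard the shared predicate with the same mutex that is passed to pthread_cond_wait, and to re-check the predicate in a while loop after every wakeup. Below is a minimal sketch of that standard pattern (the names waiter, producer and data_ready are illustrative, not taken from the example above):

#include <stdio.h>
#include <stdbool.h>
#include <pthread.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static bool data_ready = false;           /* the shared predicate */

static void *waiter(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!data_ready)                   /* re-check after every wakeup */
        pthread_cond_wait(&ready, &lock); /* atomically unlocks, waits, relocks */
    printf("waiter: data is ready\n");
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *producer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    data_ready = true;                    /* change the predicate under the lock */
    pthread_cond_signal(&ready);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, waiter, NULL);
    pthread_create(&t2, NULL, producer, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}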

Thread Scheduling:

When the threads library supports thread scheduling, each thread may have its own scheduling properties. Scheduling attributes may be specified:

  • during thread creation
  • dynamically, by changing the attributes of a thread that has already been created
  • by defining the effect of a mutex on the thread’s scheduling when creating a mutex
  • dynamically, by changing the scheduling of a thread during synchronization operations.

The threads library provides default values that are sufficient for most cases.

Thread Pitfalls:

  • Race conditions: While the code may appear on the screen in the order you wish it to execute, threads are scheduled by the operating system and may run in any order and at different speeds. It cannot be assumed that threads are executed in the order they are created. When threads race to complete, they may give unexpected results (a race condition). Mutexes and joins must be utilized to achieve a predictable execution order and outcome.
  • Thread safe code: The threaded routines must call functions which are “thread safe”. This means that there are no static or global variables which other threads may clobber or read assuming single threaded operation. If static or global variables are used then mutexes must be applied or the functions must be re-written to avoid the use of these variables. In C, local variables are dynamically allocated on the stack. Therefore, any function that does not use static data or other shared resources is thread-safe. Thread-unsafe functions may be used by only one thread at a time in a program and the uniqueness of the thread must be ensured. Many non-reentrant functions return a pointer to static data. This can be avoided by returning dynamically allocated data or using caller-provided storage. An example of a non-thread safe function is strtok which is also not re-entrant. The “thread safe” version is the re-entrant version strtok_r.
  • Mutex Deadlock: This condition occurs when a mutex is applied but then not “unlocked”. This causes program execution to halt indefinitely. It can also be caused by poor application of mutexes or joins. Be careful when applying two or more mutexes to a section of code. If the first pthread_mutex_lock is applied and the second pthread_mutex_lock fails due to another thread applying a mutex, the first mutex may eventually lock all other threads from accessing data including the thread which holds the second mutex. The threads may wait indefinitely for the resource to become free causing a deadlock. It is best to test and if the failure occurs, free the resources and stall before retrying.
 ...
    pthread_mutex_lock(&mutex_1);
    while ( pthread_mutex_trylock(&mutex_2) )  /* Test if already locked   */
    {
       pthread_mutex_unlock(&mutex_1);  /* Free resource to avoid deadlock */
       ...
       /* stall here   */
       ...
       pthread_mutex_lock(&mutex_1);
    }
    count++;
    pthread_mutex_unlock(&mutex_1);
    pthread_mutex_unlock(&mutex_2);

The order of applying the mutex is also important. The following code segment illustrates a potential for deadlock:

 void *function1()
    {
       ...
       pthread_mutex_lock(&lock1);           - Execution step 1
       pthread_mutex_lock(&lock2);           - Execution step 3 DEADLOCK!!!
       ...
       ...
       pthread_mutex_lock(&lock2);
       pthread_mutex_lock(&lock1);
       ...
    } 

    void *function2()
    {
       ...
       pthread_mutex_lock(&lock2);           - Execution step 2
       pthread_mutex_lock(&lock1);
       ...
       ...
       pthread_mutex_lock(&lock1);
       pthread_mutex_lock(&lock2);
       ...
    } 
  
    main()
    {
       ...
       pthread_create(&thread1, NULL, function1, NULL);
       pthread_create(&thread2, NULL, function2, NULL);
       ...
    }
  • If function1 acquires the first mutex and function2 acquires the second, all resources are tied up and locked.
  • Condition Variable Deadlock: The logic conditions (the “if” and “while” statements) must be chosen to ensure that the “signal” is executed if the “wait” is ever processed.
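Returning to the mutex-ordering example above: the standard cure is to make every thread acquire the locks in one agreed global order, so the circular wait can never form. A minimal sketch under that assumption (function names mirror the example; the code itself is illustrative):

#include <pthread.h>

pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;

/* Both functions take lock1 before lock2, so neither can hold
   one lock while waiting for the other in the opposite order. */
void *function1(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock1);
    pthread_mutex_lock(&lock2);
    /* ... work with both shared resources ... */
    pthread_mutex_unlock(&lock2);
    pthread_mutex_unlock(&lock1);
    return NULL;
}

void *function2(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock1);   /* same global order as function1 */
    pthread_mutex_lock(&lock2);
    /* ... */
    pthread_mutex_unlock(&lock2);
    pthread_mutex_unlock(&lock1);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, function1, NULL);
    pthread_create(&t2, NULL, function2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}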

Types of Thread

Threads are implemented in following two ways −

  • User Level Threads − user-managed threads.
  • Kernel Level Threads − operating-system-managed threads running on the kernel, the core of the operating system.

User Level Threads

In this case, the kernel is not aware of the existence of threads. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution and for saving and restoring thread contexts. The application starts with a single thread.

[Diagram: user-level threads]

Advantages

  • Thread switching does not require Kernel mode privileges.
  • User level thread can run on any operating system.
  • Scheduling can be application specific in the user level thread.
  • User level threads are fast to create and manage.

Disadvantages

  • In a typical operating system, most system calls are blocking, so when one user-level thread blocks, the whole process blocks.
  • Multithreaded application cannot take advantage of multiprocessing.

Kernel Level Threads

In this case, thread management is done by the Kernel. There is no thread management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded. All of the threads within an application are supported within a single process.

The Kernel maintains context information for the process as a whole and for individuals threads within the process. Scheduling by the Kernel is done on a thread basis. The Kernel performs thread creation, scheduling and management in Kernel space. Kernel threads are generally slower to create and manage than the user threads.

Advantages

  • The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
  • If one thread in a process is blocked, the Kernel can schedule another thread of the same process.
  • Kernel routines themselves can be multithreaded.

Disadvantages

  • Kernel threads are generally slower to create and manage than the user threads.
  • Transfer of control from one thread to another within the same process requires a mode switch to the Kernel.

Multithreading Models

Some operating systems provide a combined user-level thread and kernel-level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. There are three multithreading models:

  • Many to many relationship.
  • Many to one relationship.
  • One to one relationship.

Many to Many Model

The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads.

The following diagram shows the many-to-many threading model, where 6 user-level threads are multiplexed onto 6 kernel-level threads. In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor machine. This model provides the best level of concurrency: when a thread performs a blocking system call, the kernel can schedule another thread for execution.

[Diagram: many-to-many threading model]

Many to One Model

The many-to-one model maps many user-level threads to one kernel-level thread. Thread management is done in user space by the thread library. When a thread makes a blocking system call, the entire process is blocked. Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.

If the operating system does not support kernel-level threads, the user-level thread library uses the many-to-one model.

[Diagram: many-to-one threading model]

One to One Model

There is a one-to-one relationship between each user-level thread and a kernel-level thread. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call, and it supports multiple threads executing in parallel on multiprocessors.

The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one relationship model.

[Diagram: one-to-one threading model]

Difference between User-Level & Kernel-Level Thread

1. User-level threads are faster to create and manage.
   Kernel-level threads are slower to create and manage.
2. Implementation is by a thread library at the user level.
   The operating system supports creation of kernel threads.
3. User-level threads are generic and can run on any operating system.
   Kernel-level threads are specific to the operating system.
4. Multithreaded applications using only user-level threads cannot take advantage of multiprocessing.
   Kernel routines themselves can be multithreaded.

Pointer Basics

What is a Pointer?

“A pointer is a variable which contains the address in memory of another variable.”

The unary or monadic operator & gives the “address of a variable”.

The indirection or dereference operator * gives the “contents of an object pointed to by a pointer”.

int var;      /* declare a variable named var of integer type; it will occupy some address in memory */

int *ptr;     /* declare a pointer to an integer */

ptr = &var;   /* copy the address of variable var into the pointer ptr */

as shown in the example below:

// C program to demonstrate use of * for pointers in C
#include <stdio.h>

int main()
{
    // A normal integer variable
    int var = 10;

    // A pointer variable that holds the address of var.
    int *ptr = &var;

    // This line prints the value at the address stored in ptr.
    // The value stored there is the value of variable var.
    printf("Value of var = %d\n", *ptr);

    // The output of this line may be different in different
    // runs, even on the same machine.
    printf("Address of var = %p\n", (void *)ptr);

    // We can also use *ptr as an lvalue (the left-hand
    // side of an assignment)
    *ptr = 20; // Value at the address is now 20

    // This prints 20
    printf("After doing *ptr = 20, *ptr is %d\n", *ptr);

    // Let's check the value of var
    printf("Value of var = %d\n", var);

    return 0;
}


// A normal integer variable
int var = 10;

// A pointer variable that holds address of var.
int *ptr = &var;

// We can also use ptr as lvalue (Left hand
// side of assignment)
*ptr = 20; // Value at address is now 20

So, in the lines above, var contains 10 and the address of var is assigned to ptr. Since ptr holds the address of var, changing the value at that address (through *ptr) also changes var, as shown in the example above.
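Because a pointer gives a function access to the caller’s variable, pointers are also how C passes arguments “by reference”. A small illustrative sketch (the swap function is a textbook example, not from the original post):

#include <stdio.h>

/* Swap two ints through pointers: the function modifies
   the caller's variables, not copies of them. */
void swap(int *a, int *b)
{
    int tmp = *a;
    *a = *b;
    *b = tmp;
}

int main(void)
{
    int x = 1, y = 2;
    swap(&x, &y);                        /* pass the addresses of x and y */
    printf("x = %d, y = %d\n", x, y);    /* prints x = 2, y = 1 */
    return 0;
}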


Computer Memory and Its Types

Memory is the internal storage area of a computer system.

Just as the human brain stores our day-to-day experiences, a computer or mobile phone stores information in memory.

The term memory identifies data storage that comes in the form of silicon chips (for example pen drives and SD cards), while the word storage is used for memory that exists on tapes or hard disks. Moreover, the term memory is usually used as shorthand for physical memory, which refers to the actual silicon chips capable of holding data.

Some computers also use virtual memory, which expands physical memory onto a hard disk.

Every computer comes with a certain amount of physical memory, usually referred to as main memory or RAM.

You can think of main memory as an array of boxes, each of which can hold a single byte of information.

A computer that has 1 megabyte of memory, therefore, can hold about 1 million bytes (or characters) of information.

Memory is a major part of a computer and is categorized into several types.

Memory is an essential element of a computer, because without it a computer can’t perform even simple tasks. The performance of a computer is mainly based on its memory and CPU. Memory is the internal storage medium of a computer and is majorly categorized into two types:

1. Primary Memory / Volatile Memory.

2. Secondary Memory / Non Volatile Memory.

1. Primary Memory / Volatile Memory:

Primary memory is also called volatile memory because it can’t store data permanently. Data can be written to and read from any part of primary memory while the system is running, but it is not retained there permanently. It also has another name: RAM.

Random Access Memory (RAM):

Primary storage is referred to as random access memory (RAM) because any memory location can be selected at random. It performs both read and write operations on memory. If a power failure happens while the system is running, the data held in RAM is lost permanently; this is why RAM is volatile memory. RAM is categorized into the following types.

  • DRAM
  • SRAM
  • DRDRAM

2. Secondary Memory / Non Volatile Memory:

Secondary memory is external, permanent memory, held on external storage media such as floppy disks, magnetic disks, magnetic tapes and similar devices. Secondary memory deals with the following types of components.

Read Only Memory (ROM) :

ROM is a permanent memory that supports only read operations: data stored in it can be read back, but not overwritten in normal operation. No data loss happens when a power failure occurs while ROM is in use.

ROM comes in several variants, named as follows.

  • PROM
  • EPROM
  • EEPROM


RAM (random-access memory): This is main memory. When used by itself, the term RAM refers to read-and-write memory: you can both write data into RAM and read data from RAM.

  1. SRAM ( Static Ram): Static random access memory uses multiple transistors, typically four to six, for each memory cell but doesn’t have a capacitor in each cell.
  2. DRAM( Dynamic Ram):Dynamic random access memory has memory cells with a paired transistor and capacitor requiring constant refreshing.
  3. DRDRAM (Direct Rambus DRAM): Rambus memory is intended to replace the long-standing main memory technology, dynamic random access memory (DRAM), with much faster data transfer rates. Direct Rambus provides a two-byte (16-bit) bus rather than DRAM’s 8-bit bus. At a RAM speed of 800 megahertz (800 million cycles per second), the peak data transfer rate is 1.6 billion bytes per second.

ROM (read-only memory): Computers almost always contain a small amount of read-only memory that holds instructions for starting up the computer. Unlike RAM, ROM cannot be written to.

  1. PROM (programmable read-only memory): A PROM is a memory chip on which you can store a program. But once the PROM has been used, you cannot wipe it clean and use it to store something else. Like ROMs, PROMs are non-volatile.
  2. EPROM (erasable programmable read-only memory): An EPROM is a special type of PROM that can be erased by exposing it to ultraviolet light.
  3. EEPROM (electrically erasable programmable read-only memory): An EEPROM is a special type of PROM that can be erased by exposing it to an electrical charge.

RAM is thus in contrast to ROM, which permits you only to read data. Most RAM is volatile, which means that it requires a steady flow of electricity to maintain its contents. As soon as the power is turned off, whatever data was in RAM is lost.

  

Memory Types Used in Microcontrollers/Microprocessors


Hybrids

As memory technology has matured in recent years, the line between RAM and ROM has blurred. Now, several types of memory combine features of both. These devices do not belong to either group and can be collectively referred to as hybrid memory devices. Hybrid memories can be read and written as desired, like RAM, but maintain their contents without electrical power, just like ROM. Two of the hybrid devices, EEPROM and flash, are descendants of ROM devices.

These are typically used to store code.

EEPROMs are electrically erasable and programmable. Internally, they are similar to EPROMs, but the erase operation is accomplished electrically rather than by exposure to ultraviolet light. Any byte within an EEPROM may be erased and rewritten. Once written, the new data will remain in the device forever, or at least until it is electrically erased. The primary trade-off for this improved functionality is higher cost, and write cycles are also significantly longer than writes to a RAM, so you wouldn’t want to use an EEPROM for your main system memory.

Flash memory combines the best features of the memory devices described thus far. Flash memory devices are high density, low cost, nonvolatile, fast (to read, but not to write), and electrically re-programmable.

These advantages are overwhelming and, as a direct result, the use of flash memory has increased dramatically in embedded systems. From a software viewpoint, flash and EEPROM technologies are very similar. The major difference is that flash devices can only be erased one sector at a time, not byte-by-byte. Typical sector sizes are in the range 256 bytes to 16KB. Despite this disadvantage, flash is much more popular than EEPROM and is rapidly displacing many of the ROM devices as well.

The third member of the hybrid memory class is NVRAM (non-volatile RAM). Nonvolatility is also a characteristic of the ROM and hybrid memories discussed previously.

However, an NVRAM is physically very different from those devices. An NVRAM is usually just an SRAM with a battery backup. When the power is turned on, the NVRAM operates just like any other SRAM. When the power is turned off, the NVRAM draws just enough power from the battery to retain its data. NVRAM is fairly common in embedded systems.

However, it is expensive, even more expensive than SRAM because of the battery, so its applications are typically limited to the storage of a few hundred bytes of system-critical information that can’t be stored in any better way.

Type        Volatile?  Writeable?                      Erase size    Max erase cycles             Cost (per byte)              Speed
SRAM        Yes        Yes                             Byte          Unlimited                    Expensive                    Fast
DRAM        Yes        Yes                             Byte          Unlimited                    Moderate                     Moderate
Masked ROM  No         No                              n/a           n/a                          Inexpensive                  Fast
PROM        No         Once, with a device programmer  n/a           n/a                          Moderate                     Fast
EPROM       No         Yes, with a device programmer   Entire chip   Limited (consult datasheet)  Moderate                     Fast
EEPROM      No         Yes                             Byte          Limited (consult datasheet)  Expensive                    Fast to read, slow to erase/write
Flash       No         Yes                             Sector        Limited (consult datasheet)  Moderate                     Fast to read, slow to erase/write
NVRAM       No         Yes                             Byte          Unlimited                    Expensive (SRAM + battery)   Fast

Memory Map

When working with microcontrollers, many tasks consist of controlling the peripherals that are connected to the device, or of programming the subsystems contained in the controller (which themselves communicate with the circuitry connected to the controller).

The AVR series of microcontrollers offers two different ways to perform this task:

  • There’s a separate I/O address space available that can be addressed with specific I/O instructions that are applicable to some or all of the I/O address space (in, out, sbi etc.), known as I/O mapping.
  • The entire I/O address space is also made available as memory-mapped I/O, i. e. it can be accessed using all the MCU instructions that are applicable to normal data memory. The I/O register space is mapped into the data memory address space with an offset of 0x20 since the bottom of this space is reserved for direct access to the MCU registers. (Actual SRAM is available only behind the I/O register area, starting at either address 0x60, or 0x100 depending on the device.).

This is why there are two sets of addresses in the AVR data sheets. The first address is using I/O mapping and the second address in brackets is the memory mapped approach.
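In C, memory-mapped I/O usually surfaces as a volatile pointer to a fixed address. A minimal sketch follows; the address 0x25 stands in for a real data-sheet value, so treat it as illustrative (on an actual AVR you would include <avr/io.h> and use the predefined PORTB macro instead of hard-coding an address):

#include <stdint.h>

/* Hypothetical memory-mapped register: I/O address 0x05 plus the
   0x20 offset gives data-space address 0x25. volatile tells the
   compiler that every access really touches the hardware and must
   not be optimized away or reordered. */
#define PORT_REG (*(volatile uint8_t *)0x25)

int main(void)
{
    PORT_REG |= (1 << 3);    /* set bit 3: drive that pin high */
    PORT_REG &= ~(1 << 3);   /* clear bit 3: drive it low */
    return 0;
}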

Registers

Registers are special storage locations with a capacity of 8 bits, and they look like this:

7 6 5 4 3 2 1 0

Note the numbering of these bits: the least significant bit starts with zero (2^0 = 1).
A register can either store numbers from 0 to 255 (positive numbers, no negative values), or numbers from -128 to +127 (whole numbers with a sign bit in bit 7), or a value representing an ASCII-coded character (e.g. ‘A’), or just eight single bits that do not have anything to do with each other (e.g. eight single flags used to signal eight different yes/no decisions).
The special characteristic of registers, compared to other storage locations, is that

  • they can be used directly in assembler commands,
  • operations with their content require only a single command word,
  • they are connected directly to the central processing unit (historically, such a register was called the accumulator),
  • they are source and target for calculations.

There are 32 registers in an AVR. They are originally named r0 to r31. However, certain instructions will not work on registers r0 to r15; e.g. ldi will only work on registers r16 to r31.
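The “eight single flags” use of a register translates directly into C bit operations on an 8-bit value. A small sketch (the flag names are invented for illustration):

#include <stdio.h>
#include <stdint.h>

/* Eight unrelated yes/no decisions packed into one 8-bit value. */
enum {
    FLAG_READY = 1 << 0,   /* bit 0, the least significant bit */
    FLAG_ERROR = 1 << 1,
    FLAG_DONE  = 1 << 7    /* bit 7, the most significant bit  */
};

int main(void)
{
    uint8_t flags = 0;

    flags |= FLAG_READY;              /* set a flag   */
    flags &= (uint8_t)~FLAG_ERROR;    /* clear a flag */

    if (flags & FLAG_READY)           /* test a flag  */
        printf("ready\n");
    return 0;
}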

 

von Neumann architecture and Harvard architecture

Harvard architecture has separate data and instruction busses, allowing transfers to be performed simultaneously on both busses.

von Neumann architecture has only one bus which is used for both data transfers and instruction fetches.

Therefore data transfers and instruction fetches must be scheduled; they cannot be performed at the same time. It is possible, however, to have two separate memory systems for a Harvard architecture.

The following are the differences between the Harvard architecture and the von Neumann architecture.

1. Origin of the name. Harvard: originates from the “Harvard Mark I”, a relay-based early computer. Von Neumann: named after the mathematician and early computer scientist John von Neumann.
2. Memories. Harvard: requires two memories, one for instructions and one for data. Von Neumann: requires only one memory for both instructions and data.
3. Design. Harvard: the design is complicated. Von Neumann: the design is simple.
4. Buses. Harvard: requires separate buses for instructions and data. Von Neumann: requires only one bus for both instructions and data.
5. Cycles. Harvard: the processor can complete an instruction in one cycle. Von Neumann: the processor needs two clock cycles to complete an instruction.
6. Performance. Harvard: easier to pipeline, so high performance can be achieved. Von Neumann: lower performance compared with the Harvard architecture.
7. Cost. Harvard: comparatively high cost. Von Neumann: cheaper.

 

 

RISC and CISC Architecture

What is RISC?

A reduced instruction set computer is a computer that uses only simple instructions, each performing a low-level operation and typically completing within a single clock cycle, as its name, “Reduced Instruction Set”, suggests.

Examples: Apple iPod, Nintendo DS, etc.

What is CISC?

A complex instruction set computer is a computer in which a single instruction can perform numerous low-level operations (such as a load from memory, an arithmetic operation, and a memory store), whether accomplished as a multi-step process or through addressing modes within single instructions, as its name, “Complex Instruction Set”, suggests.

Examples: IBM 370/168, VAX 11/780, Intel 80486, etc.

Difference Between CISC and RISC

The main difference between RISC and CISC is in the number of clock cycles each of their instructions takes. The difference in the number of cycles is based on the complexity and the goal of their instructions.

1. Instruction size and format. CISC: large set of instructions with variable formats (16-64 bits per instruction). RISC: small set of instructions with a fixed 32-bit format.
2. Data transfer. CISC: memory to memory. RISC: register to register.
3. CPU control. CISC: mostly microcoded using a control memory (ROM), though modern CISC designs also use hardwired control. RISC: mostly hardwired, without a control memory.
4. Instruction type. CISC: instructions are not register-based. RISC: register-based instructions.
5. Memory access. CISC: more memory access. RISC: less memory access.
6. Clocks. CISC: multi-clock instructions. RISC: single-clock instructions.
7. Instruction nature. CISC: instructions are complex. RISC: instructions are reduced and simple.