A Report On Multithread Programs Computer Science Essay

Published: November 9, 2015 Words: 2180

Threads live inside a process and are the part of the software that uses the CPU to execute programs, simple and complex alike. Depending on the resources required, multiple threads are used for different levels of program execution. Multiple threads can exist within the same process and share its memory. The main aim of multithreaded programs is to execute several tasks in a short time; because the threads share one address space, the memory footprint of the program is reduced. Multithreading therefore brings a higher level of responsiveness to the user and lower memory consumption, which reduces cost. Using multithreaded programs we can execute programs quickly: the rate of execution is fast compared with normal, sequential execution, and applications can run simultaneously, saving both time and memory. Threads are divided into two types, user-level and kernel-level. User-level threads are visible to and managed by the developer, while kernel-level threads are managed only by the OS. Multithreading also comes with challenges such as race conditions, deadlocks, resource sharing, and execution errors.
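As a minimal sketch of the race-condition challenge mentioned above, consider two Java threads incrementing a shared counter without synchronization (the class and method names here, such as RaceDemo, are invented for illustration):

```java
// Sketch of a race condition: two threads increment a shared counter
// without any synchronization, so updates can be lost when the
// read-modify-write sequences of the two threads interleave.
class RaceDemo {
    static int counter = 0;

    static int run(int increments) {
        counter = 0;
        Runnable work = () -> {
            for (int i = 0; i < increments; i++) {
                counter++;               // read-modify-write: not atomic
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        try {                            // wait for both threads to finish
            t1.join();
            t2.join();
        } catch (InterruptedException e) { }
        return counter;                  // may be less than 2 * increments
    }
}
```

The final count can be anywhere between `increments` and `2 * increments`, which is exactly the kind of nondeterminism a lock is meant to prevent.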

Investigation of multithreaded programs:

Speculative multithreading (SPMT) is an effective mechanism for parallelizing irregular programs. By allowing multiple threads to execute in the presence of ambiguous data and control dependences, thread-level parallelism is exploited effectively. The thread partitioning method for Prophet is based on the weighted control flow graph (WCFG): structural analysis and a set of rules are used to partition the WCFG into sub-graphs, and each sub-graph represents a thread. For exploiting parallelism in general-purpose programs, using speculative threads is one of the most effective ways to improve the performance of procedures. Instruction-level parallelism (ILP) is a traditional technology limited by instruction window size, clock cycle, and memory latency. Thread-level parallelism (TLP) means executing multiple threads from the same program in parallel using multiple processing elements on a single chip. In the presence of ambiguous data and control dependences, SPMT is an effective mechanism for exploiting TLP from general-purpose programs.

In the Prophet thread model, a sequential program is partitioned into multiple speculative threads, each of which executes a different part of the program. There is one non-speculative thread; all the others are speculative. The results of speculative threads are verified before they are committed or discarded. Through control flow analysis, candidate threads are recognized as pairs of control quasi-independent points (CQIPs). Using a data dependence model and a series of heuristic rules, the candidate threads are evaluated, and only those that meet the requirements are spawned. A spawn point (SP) is defined in the parent thread of each candidate thread that can be spawned, to mark its position. For each speculative thread, the Prophet compiler constructs a pre-computation slice (p-slice) to predict the values involved in inter-thread data dependences. A p-slice should be lightweight: it provides the key dependent values of its thread, is triggered at run time as an auxiliary of the corresponding thread, and is a simplified version of its parent thread.

Program profiling technology and a top-down structural analysis are used in the thread partitioning method. The profiling process provides branch probabilities, the number of dynamic instructions in loops and procedures, the iteration counts of loop regions, and procedure call overheads. From this information the weighted control flow graph (WCFG) is built. Structural analysis then traverses the WCFG of each procedure, identifies the loop regions, and partitions them.

In Prophet, the pre-computation slice (p-slice) reduces inter-thread data dependences.

Multithreaded Programming in DCE Applications

RPC and multithreading are the main facilities DCE (Distributed Computing Environment) provides for communication and synchronization within an address space. The main problem with RPC is that it blocks when executed. Multithreading was introduced to deal with this blocking problem: it allows the programmer to create multiple threads in an address space that can perform many functions. While one thread is waiting for an RPC reply (or anything else), other threads can do useful tasks; the combination of the two therefore increases application throughput and performance.
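The idea of overlapping a blocking call with useful work can be sketched in plain Java (this is not DCE RPC; `slowRpc` simulates a blocking remote call with `Thread.sleep`, and all names here are invented):

```java
// Sketch: while one thread blocks on a (simulated) RPC, the main
// thread does useful local work; the results are combined at the end.
class OverlapDemo {
    static volatile int rpcResult = 0;

    static int slowRpc() {
        try { Thread.sleep(100); } catch (InterruptedException e) { }
        return 42;                        // simulated reply from the server
    }

    static int run() {
        Thread rpcThread = new Thread(() -> rpcResult = slowRpc());
        rpcThread.start();                // blocking call proceeds in background
        int localSum = 0;
        for (int i = 1; i <= 100; i++) {  // useful work done meanwhile
            localSum += i;
        }
        try { rpcThread.join(); } catch (InterruptedException e) { }
        return localSum + rpcResult;      // combine local and remote results
    }
}
```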

However, multithreading is not useful in all applications. For example, the performance of CPU-bound applications decreases when multithreaded programs run on a single processor, making them less efficient.

DCE (Distributed Computing Environment)

The Distributed Computing Environment provides many services for the development and use of transparent distributed systems built on the client-server model. The main goal of DCE is to provide application-level interoperability and portability across different platforms through common interfaces. DCE uses RPC for communication and multithreading within an address space for parallelism. It uses directory and name servers to give clients location transparency for servers.

DCE lies between the distributed application and the OS and network services. A DCE client application generates a service request using DCE procedures; DCE then uses OS and network services to transfer that request to the server and to transfer the result of the remote computation back to the client. Fig 1 shows the block diagram of DCE.

Threads

The DCE multithreading services enable many simultaneous flows of control to execute in a single address space. The major advantages are increased throughput and efficient use of system resources.

In both models, each thread is allocated its own stack and registers, while code, heap, and static data are shared. This gives each thread private stack and register data alongside shared heap and static data in a single address space.

In the client-server model, a server application can use threads to serve many clients simultaneously; by default, DCE servers are multithreaded. A client uses threads to make many parallel requests to a server. Each thread proceeds independently, using its own stack space and registers, synchronizing with the others when needed and sharing heap and static data as necessary.
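A thread-per-client server can be sketched in Java (again a simulation, not DCE: `handleRequest` and `serveAll` are invented stand-ins, and each "request" is just an integer id handled by its own thread):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a multithreaded server: each client request gets its own
// thread, all requests are handled concurrently, and the boss waits
// for every worker to finish before reporting how many were served.
class ServerDemo {
    static final AtomicInteger served = new AtomicInteger(0);

    static void handleRequest(int clientId) {
        served.incrementAndGet();        // record that this client was served
    }

    static int serveAll(int clients) {
        served.set(0);
        Thread[] workers = new Thread[clients];
        for (int i = 0; i < clients; i++) {
            final int id = i;
            workers[i] = new Thread(() -> handleRequest(id));
            workers[i].start();          // one independent thread per client
        }
        for (Thread w : workers) {       // wait for all workers
            try { w.join(); } catch (InterruptedException e) { }
        }
        return served.get();
    }
}
```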

Threads Implementation

Implementation can be done in two ways, either in user space or in kernel space. Most DCE thread implementations are done in user space. Threads are the basic units of scheduling and resource allocation by the OS.

Thread APIs

The DCE threads package comes with a basic set of functions for thread creation, administration, and synchronization. The basic operations can be classified into the following groups:

- Synchronization: allows multiple threads to coordinate and communicate with each other

- Signal handling: traps and handles OS signals

- Thread-specific data: lets each thread keep its own version of global data
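Java's `ThreadLocal` plays the same role as DCE thread-specific data: each thread sees its own copy of a "global" variable. A minimal sketch (class names invented for illustration):

```java
// Each thread that touches `perThread` gets its own private copy;
// the worker's update does not disturb the main thread's value.
class TsdDemo {
    static final ThreadLocal<Integer> perThread =
        ThreadLocal.withInitial(() -> 0);
    static volatile int observedInWorker;

    static int run() {
        perThread.set(1);                         // main thread's private copy
        Thread worker = new Thread(() -> {
            perThread.set(2);                     // worker's private copy
            observedInWorker = perThread.get();   // sees 2, not 1
        });
        worker.start();
        try { worker.join(); } catch (InterruptedException e) { }
        return perThread.get();                   // still 1 in the main thread
    }
}
```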

Threaded programming

Consider the sample code for coarse-grain locking. The code contains a series of operations on a database.

db_access_thread()
{
    ...
    pthread_lock_global_np();    /* get the global lock */
    open_db(db_name);
    first_db_operation();
    ...
    final_db_operation();
    close_db(db_name);
    pthread_unlock_global_np();  /* release the global lock */
}

The above code is not optimized; it is simplified for readability and omits some variable declarations, conditions, etc.

Synchronization Granularity

The main issue in multithreaded programming is synchronization among threads, so that they interact properly.

DCE threads provide two types of synchronization objects: the mutex (a mutual exclusion object, also known as a lock) and the condition variable. They ensure the integrity of shared resources. Synchronization comes in different granularities: coarse, fine, and intermediate.

#define LOCK(X)   if (pthread_mutex_lock(&(X)) == -1) {   \
                      printf("error: cannot lock mutex\n"); \
                      exit(-1);                             \
                  }

#define UNLOCK(X) if (pthread_mutex_unlock(&(X)) == -1) { \
                      printf("error: cannot unlock mutex\n"); \
                      exit(-1);                               \
                  }

db_access_thread()
{
    pthread_mutex_t db_mutex;
    ...
    LOCK(db_mutex);      /* lock db access */
    open_db(db_name);
    first_db_operation();
    ...
    final_db_operation();
    close_db(db_name);
    UNLOCK(db_mutex);    /* unlock db access */
}

The above code is an example of finer-grained synchronization: a dedicated mutex protects database access instead of a single global lock.

Consider a typical example of the boss/worker model. A boss thread creates 'n' worker threads, each of which executes an RPC call to its allocated server. The worker thread that completes the last RPC signals a condition variable to inform the boss thread that the work is complete.

typedef struct {                        /* struct is a general mechanism to pass */
    handle_t binding_handle;            /* several parameters to a newly */
    int *running;                       /* created thread */
} thread_args;

int ctr;                                /* counter: number of running threads */
pthread_mutex_t sync;                   /* mutex to update structure members */
pthread_cond_t sync_cv;                 /* CV to signal boss thread */

main()
{
    thread_args t_arg[MAX_THREADS];     /* array of structures */
    pthread_t t_id[MAX_THREADS];        /* array of thread ids */
    ...                                 /* get server binding info */
    LOCK(sync);                         /* first lock the mutex for the wait call */
    for (ctr = 0; ctr < N; ctr++) {     /* ctr is shared by all threads */
        t_arg[ctr].binding_handle = servers[ctr];
        t_arg[ctr].running = &ctr;
        pthread_create(&t_id[ctr], pthread_attr_default,
                       RPC_FUNCTION, &t_arg[ctr]);
    }
    while (ctr != 0)                    /* wait for all threads to stop */
        pthread_cond_wait(&sync_cv, &sync);
    UNLOCK(sync);
    ...                                 /* remainder of application code */
}

RPC_FUNCTION(arg)
thread_args *arg;
{
    ...                                 /* make RPC call and other operations */
    LOCK(sync);                         /* prevent other threads from decrementing */
    ctr -= 1;                           /* decrement running threads count */
    if (ctr == 0)                       /* is this the last thread? */
        pthread_cond_signal(&sync_cv);  /* wake up boss thread */
    UNLOCK(sync);                       /* let other worker threads run */
    pthread_exit(0);
}
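The same boss/worker pattern can be sketched in Java, where a `CountDownLatch` replaces the mutex/condition-variable pair (the worker "RPC" is simulated by summing the thread arguments; all names here are invented):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Boss/worker in Java: the boss creates n workers, each does its
// (simulated) RPC work and counts down; the boss blocks on await()
// until the last worker finishes, like the condition-variable signal.
class BossWorker {
    static int run(int n) {
        CountDownLatch done = new CountDownLatch(n);  // running-workers count
        AtomicInteger total = new AtomicInteger(0);   // accumulated "replies"
        for (int i = 1; i <= n; i++) {
            final int arg = i;
            new Thread(() -> {
                total.addAndGet(arg);   // stand-in for the RPC call
                done.countDown();       // equivalent of ctr -= 1 plus signal
            }).start();
        }
        try { done.await(); } catch (InterruptedException e) { }  // boss waits
        return total.get();
    }
}
```

Using a latch avoids the manual lock-then-wait protocol of the C version: the happens-before ordering between `countDown()` and `await()` is handled by the library.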

MULTITHREADED PROGRAMMING IN JAVA

Java has built-in support for multithreaded programming: a multithreaded program contains two or more parts that run concurrently, and each such part is called a thread. Multithreading is a specialized form of multitasking. A process has its own memory space allocated by the operating system and contains one or more threads; a thread cannot exist on its own, it is always part of a process.

Life cycle:-

New: -

a newly created thread begins its life cycle in the new state; it is also called a born thread. It remains in this state until the program starts the thread.

Runnable: -

after the born thread is started, the thread becomes runnable and is considered to be executing its task.

Waiting: -

sometimes a thread transitions to the waiting state while it waits for another thread to perform a task. A thread transitions back to the runnable state only when another thread signals the waiting thread to continue executing.

Timed waiting: -

a runnable thread can enter the timed-waiting state for a specified interval of time; it transitions back to the runnable state when that interval expires.

Terminated: -

the final state, which the thread enters when it completes its task.
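Two of these states can be observed directly through `Thread.getState()`, a minimal sketch (class name invented for illustration):

```java
// Observing the life cycle: a thread is NEW before start() and
// TERMINATED after its run() method has returned.
class LifeCycleDemo {
    static String[] observeStates() {
        Thread t = new Thread(() -> { /* born, runs, then terminates */ });
        String before = t.getState().name();   // state before start()
        t.start();
        try { t.join(); } catch (InterruptedException e) { }
        String after = t.getState().name();    // state after run() returns
        return new String[] { before, after };
    }
}
```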

Extending Thread Class:-

class ThreadX extends Thread {
    public void run() {
        // thread processing goes here
    }
}

Runnable Interface:-

class RunnableY implements Runnable {
    public void run() {
        // thread processing goes here
    }
}

Starting a Thread:-

Thread tx = new ThreadX();
tx.start();

Starting a Thread with a Runnable:-

Runnable ry = new RunnableY();
Thread ty = new Thread(ry);
ty.start();

run() method:-

public void run();

Thread Constructor:-

Thread()

Thread(Runnable r)

Thread(Runnable r, String s)

Thread(String s)

Threads:-

A thread is a separate stream of execution that takes place simultaneously with, and independently of, everything else that might be happening. A thread is like a program that starts at point X and executes until it reaches point Y. Without threads, an entire program can be held up by one CPU-intensive task or one infinite loop. Implementing threading is harder than implementing multitasking in an OS. Java allows a thread to put locks on shared resources, so that while one thread is using data no other thread can touch that data; this is done with synchronization.
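The locking just described can be sketched with Java's `synchronized` keyword (class names invented for illustration): while one thread holds the lock, no other thread can enter the synchronized block, so the shared counter stays consistent.

```java
// Two threads increment a shared counter, but every increment happens
// inside a synchronized block, so no update is ever lost.
class SyncDemo {
    static final Object lock = new Object();
    static int counter = 0;

    static int run(int increments) {
        counter = 0;
        Runnable work = () -> {
            for (int i = 0; i < increments; i++) {
                synchronized (lock) {   // only one thread at a time in here
                    counter++;
                }
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) { }
        return counter;                 // exactly 2 * increments
    }
}
```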

Java threads:-

In Java, thread creation and management is done within the program in 2 forms:

- Extending a class: a child class inherits methods and variables from a single parent class; this is the most common way of creating Java threads.

- Implementing an interface: interfaces allow programmers to create an abstraction to be implemented by future classes. The interface sets the stage, while the implementing classes perform the actual tasks, all following the same set of rules enforced by the interface.

Threads and the java language:-

Instantiate an object of a Thread subclass and send it the start() message. The code each thread executes is contained in its run() method. A run() method is the equivalent of main() in a traditional program: a thread will continue running until run() returns, at which point the thread dies.

THREAD CLASS METHODS:-

static Thread currentThread() - Returns a reference to the Thread object that represents the invoking thread.

long getId() - Returns a thread's ID.

final boolean isAlive() - Determines whether a thread is still running.

void run() - Entry point for the thread.

final boolean isDaemon() - Returns true if the invoking thread is a daemon thread.

final void setDaemon(boolean how) - If how is true, the invoking thread is set to daemon status.

final String getName() - Obtains a thread's name.

final int getPriority() - Obtains a thread's priority.

Thread.State getState() - Returns the current state of the thread.

static boolean holdsLock(Object obj) - Returns true if the invoking thread holds the lock on obj.

final void setName(String thrdName) - Sets a thread's name to thrdName.

boolean isInterrupted() - Returns true if the thread on which it is called has been interrupted.

final void join() - Waits for a thread to terminate.

void start() - Starts a thread by calling its run() method.

static void yield() - Yields the CPU to another thread.

final void setPriority(int level) - Sets a thread's priority to level.

static void sleep(long milliseconds) - Suspends a thread for a specified period of milliseconds.
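A few of the methods listed above can be demonstrated together, a minimal sketch (class name invented for illustration):

```java
// Demonstrates setName/getName, sleep(), isAlive(), and join():
// the worker sleeps briefly, so it is alive before join() and
// no longer alive after join() returns.
class MethodsDemo {
    static boolean[] run() {
        Thread t = new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException e) { }
        });
        t.setName("worker");                     // setName / getName pair
        t.start();
        boolean aliveBeforeJoin = t.isAlive();   // still sleeping
        try { t.join(); } catch (InterruptedException e) { }
        boolean aliveAfterJoin = t.isAlive();    // terminated by now
        return new boolean[] { aliveBeforeJoin, aliveAfterJoin,
                               t.getName().equals("worker") };
    }
}
```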

Conclusion: