Security in large networks using mediator protocols

Published: November 21, 2015

Introduction

In this project we use implicit (hidden) quantum cryptography and explicit (open) quantum cryptography. By unifying the implicit protocol (3AQKDP) and the explicit protocol (3AQKDPMA), we can obtain authenticated, guaranteed communication between sender and receiver. In quantum cryptography, QKDPs (Quantum Key Distribution Protocols) use quantum mechanisms to distribute session keys, together with public discussions to check for eavesdroppers and to verify the correctness of a session key.

However, the public discussions between sender and receiver require extra communication rounds. The advantage of quantum cryptography is that it easily defends against replay attacks and passive threats.

The 3AQKDP provides implicit user authentication, which guarantees that confidentiality is achievable only for genuine users and that mutual authentication is achieved only after secure communication using the session key has started. The 3AQKDP (implicit quantum key distribution protocol) consists of two phases, a setup phase and a key distribution phase, which together give three-party authentication with secure session key distribution. In this scheme there is no direct agreement between sender and receiver; both correspond through the trusted centre. The 3AQKDPMA (explicit quantum key distribution protocol) also consists of two phases: 1) a setup phase and 2) a key distribution phase. Unlike the implicit protocol, in the explicit protocol there is direct agreement between sender and receiver: they communicate directly, with verification by the trusted centre (TC). Using the 3AQKDP and 3AQKDPMA processes individually has one disadvantage: each can recognize security threats in the message, but used separately neither detects security threats in the session key.

Computer-based resources are the software, information, hardware, procedures, and people that must be protected against illegal login use or natural failures. Protecting these resources is called system security.

System security covers five areas:

a) System Security

b) Data Security

c) System Integrity

d) Privacy

e) Confidentiality

SYSTEM SECURITY:

To defend against intentional or unintentional damage to data from a defined threat, we apply technical safeguards and procedures to the operating system (OS) and to the hardware.

DATA SECURITY:

Typically, attackers will try to steal data, alter the actual data, ruin it, or disclose it. Data security provides protection against these attacks in order to preserve the data.

SYSTEM INTEGRITY:

There are attacks such as wiretapping and eavesdropping. To protect against them, system integrity ensures the proper functioning of all the programs in the system and of the system hardware, and it also takes care of physical security.

PRIVACY:

Privacy describes what data clients want to send to others or obtain from others, what information an organization wants to send to or receive from other organizations, and how clients or firms can secure their information. Privacy, in short, is securing the private matters of users and organizations.

CONFIDENTIALITY:

Confidentiality means knowing the position of important data in a database so that possible attacks on private data can be reduced. It indicates the importance of the data, and hence the importance of defending it against attacks.

Introduction

1.1 About the Project

Key distribution protocols are used to facilitate the sharing of secret session keys between participants in communication networks. With these distributed session keys, secure communication between sender and receiver becomes achievable over insecure public networks. Conversely, badly designed key distribution protocols give rise to many problems. For example, a malicious attacker may derive the session key from the key distribution procedure; a genuine user cannot tell whether the received session key is right or wrong; and a genuine user cannot verify the identity of the other user. Designing secure key distribution protocols is therefore a top priority in communication security. In some key distribution protocols, the trusted centre (TC) provides a distributed session key to both users. Three parties are involved in this process, the two users and one trusted centre (TC), and all three actively participate in the session key transfer. Such protocols, comprising sender, receiver, and one trusted centre (TC), are together called three-party key distribution protocols, whereas in two-party protocols only two parties, the sender and the receiver, are involved in the session key exchange. In classical cryptography, three-party key distribution protocols make use of challenge-response mechanisms or timestamps.

However, challenge-response mechanisms require at least two communication rounds between the trusted centre and the two users, while the timestamp approach requires the assumption of clock synchronization. Clock synchronization is not practical in distributed systems because of possible attacks and the unpredictable nature of network delays. In addition, the presence of passive attacks such as eavesdropping cannot be detected in classical cryptography. In contrast, a quantum channel can be used to detect eavesdropping and thereby resist replay attacks; this fact is used to reduce the total number of rounds of protocols based on challenge-response mechanisms involving a trusted centre (TC), and not only of three-party authenticated key distribution protocols. QKDPs (Quantum Key Distribution Protocols) use quantum mechanisms to distribute session keys, together with public discussions to check for eavesdroppers and to prove the correctness of the session key. These public discussions require additional communication rounds between sender and receiver. In contrast, classical cryptography provides convenient techniques that enable efficient key verification and user authentication. Previously proposed QKDPs cover security proofs, theoretical design, and physical implementation.

To distribute a session key securely between genuine users, Bennett and Brassard proposed the essential theoretical design, exploiting the uncertainty of quantum measurement and four qubit states. Bennett later showed how to create a session key between genuine users from two non-orthogonal qubit states. The quantum key distribution protocol offered by Ekert is based on EPR (Einstein-Podolsky-Rosen) pairs and requires the genuine participants to hold quantum memories for qubits. These protocols allow genuine users to create a session key without previously sharing secret keys and do not require a trusted centre (TC); however, their security rests on the assumption that the users are already authenticated, and without that assumption the protocols may suffer man-in-the-middle attacks.

Hwang et al. proposed a modified quantum cryptographic protocol that requires each pair of users to share a secret key beforehand for the selection of measurement bases. However, the users still have to carry out public discussions to confirm the correctness of the session key. Alternatively, each user and the trusted centre (TC) may pre-share a sequence of EPR pairs rather than a secret key; the three-party quantum key distribution protocol was proposed for this requirement. In that approach the EPR pairs are measured and consumed, so the trusted centre (TC) and the user must reconstruct them after each execution of the quantum key distribution.

1.2 Benefits of Three-Party Authentication for Key Distribution Protocols Using Implicit and Explicit Quantum Cryptography

Three-party authentication for key distribution using implicit (hidden) and explicit (open) quantum cryptography has several advantages. The basic idea of combining the hidden and open quantum cryptography is that the session key can be confirmed among the three parties: the trusted centre, the sender, and the receiver. This improves verification and also secures the communication, and it recognizes security threats during session key confirmation. It can keep noise out of message transmission by checking the size of the bytes sent from one user to another over the network; this is the main advantage of this project. It also gets rid of extra byte content in the network.

2. Organization Profile

2.1 Company Profile

Nortel Solution provides complete software solutions; we work with our customers' technologies and with changes in the industry to realize their readiness advantages. Nortel Solution (P) Ltd, established in 2006, is a software company that provides services helping business organizations set up and maintain their business-critical software more successfully.

Nortel Solution (P) Ltd facilitates activities from small to large, making use of a mixture of services, software industry practice, and proprietary software. It offers the speed, steadiness, and clarity of software enterprises and information technology service providers for very little money. Many industries suffer delays, problems with distribution of products, problems with money, lack of software support on the desktop, and risks from remote devices and servers; Nortel Solution (P) Ltd addresses all these requirements and helps firms avoid these problems. Its automated solution includes fast, touch-free operations, upcoming software upgrades, tool security patches and attachments, records and tracking of technology quality, administration of policies, self-service remediation of applications, and optimization of software certification. Nortel Solution (P) Ltd delivers software results, and it also handles organizations' ongoing work and customers' technologies, which gives it a good figure in the market.

2.2 About The People

Every employee in a team has a clear vision, and the team members make that dream real with their ability and effort. The fields covered include control systems, programmable logic controllers, networking with C, C++, and Java, Visual Basic (VB), Visual C++ (VC++), Linux operating system (OS) concepts, microcontrollers and embedded systems, interfacing using the C programming language, design and implementation of very-large-scale integration (VLSI), and, in the Java programming language, client and server technologies such as Java 2 Enterprise Edition (J2EE), Java Platform Micro Edition (J2ME), Java Platform Standard Edition (J2SE), and Enterprise JavaBeans (EJB). As a numerical estimate, the team members bring some 30,000 hours of experience to deliver real-time solutions in all these fields.

2.3 Our Vision

Making the vision achievable, dreaming it and making it understood, is our main aim.

2.4 Our Mission

We balance building and running the right processes against universal standards, and we deliver high-quality, consistent, high-value services to all our customers worldwide, providing reliable, low-priced information technology products.

3. System Analysis

3.1 Existing System

To prevent replay attacks, three-party key distribution protocols in classical cryptography exploit challenge-response mechanisms or timestamps.

However, challenge-response mechanisms need at least two communication rounds among the three parties (one trusted centre and two users). The timestamp method requires the assumption of clock synchronization, which is not realistic in distributed systems because of the volatile nature of network delays and possible hostile attacks.

Moreover, classical cryptography cannot detect the existence of passive threats such as eavesdropping. This fact can be used to reduce the total number of rounds of other protocols based on challenge-response mechanisms involving a trusted centre, not only of three-party authenticated key distribution protocols.

3.2 Limitations of Existing System

When the 3AQKDP and 3AQKDPMA processes are used individually, they provide message authentication only: they spot security threats in the message but do not recognize security threats in the session key.

3.3 Proposed System

In quantum cryptography, QKDPs use quantum mechanisms to distribute session keys and public discussions to check for eavesdroppers and verify the correctness of the session key. Nonetheless, the public discussions between sender and receiver require more communication rounds, and qubits are expensive. In contrast, classical cryptography provides convenient techniques that enable efficient key authentication and user authentication.

Quantum key distribution is split into two types. They are

* The proposed 3AQKDP

* The proposed 3AQKDPMA

3.3.1 The Proposed 3AQKDP

This protocol clarifies the particulars of the 3AQKDP, using the notation defined in earlier sections. We assume that each user shares a secret key with the trusted centre in advance, either directly or indirectly.

3.3.2 The Proposed 3AQKDPMA

It consists of two phases:

* setup phase

* key distribution phase.

Setup phase: In this phase, the users pre-share a secret key with the trusted centre and agree on how to select the measurement bases of the qubits based on the pre-distributed secret key.

Key distribution phase: This phase demonstrates how the session key can be distributed to the two users, Alice and Bob, with the support of the trusted centre (TC), while accomplishing open (explicit) user authentication.

Citation: K.-Y. Lam and D. Gollmann, "Freshness Assurance of Authentication Protocols," Proc. European Symp. Research in Computer Security (ESORICS '92), pp. 261-271, 1992.

4. Problem Formulation

4.1 Software objectives:

The problem formulation is a mixture of classical and quantum cryptography: classical cryptography is the existing system and quantum cryptography is the proposed system. There are two three-party quantum key distribution protocols, one with implicit (hidden) user authentication and the other with explicit (open) user authentication, and both are used to perform verification using quantum mechanisms.

Classical cryptography provides convenient techniques that enable efficient key confirmation and user verification, but it does not detect eavesdropping attacks. The improved key distribution protocol develops safety and authentication by using classical cryptography and quantum cryptography together.

4.2 Software Requirement Specification

The software requirements are established at the conclusion of the analysis task. The role and performance allocated to software during system engineering are refined by creating a complete information description: a functional representation, an indication of the system's behavior and performance requirements, appropriate validation criteria, and design constraints.

User Interface:

* Swing - Swing is a set of classes that provides more powerful and flexible components than the Abstract Window Toolkit (AWT). In addition to familiar components such as labels and checkboxes, Swing supplies several exciting additions, including tabbed panes, tables, trees, and scroll panes; a minimal example follows this list.

* Applet - An applet is a dynamic and interactive program that can execute inside a web page displayed by a Java-capable browser such as Netscape or HotJava.
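As a small sketch of the Swing components named above (the window title and label text are illustrative; the classes are standard Swing APIs):

```java
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JTabbedPane;
import javax.swing.SwingUtilities;

public class SwingDemo {
    public static void main(String[] args) {
        // Build the GUI on the event dispatch thread, as Swing requires.
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                JFrame frame = new JFrame("Swing Demo");
                JTabbedPane tabs = new JTabbedPane();   // one of Swing's additions to AWT
                tabs.addTab("Status", new JLabel("Sender ready"));
                tabs.addTab("Log", new JLabel("No messages yet"));
                frame.add(tabs);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setSize(320, 200);
                frame.setVisible(true);
            }
        });
    }
}
```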

4.2.1 Hardware Interface

Ø Hard disk size : 40 GB

Ø RAM size : 512 MB

Ø Processor speed : 3.00 GHz

Ø Processor type : Pentium IV

4.2.2 Software Interface

Ø Java Development Kit (JDK) 1.5

Ø Java Swing

Ø SQL Server or MS-Access

4.3 Software Description

About Java:

Java is two things: a programming language and a platform. Java is a high-level programming language, and it is characterized by all of the following keywords.

Simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded, and dynamic.

Java is unusual in that every Java program is both compiled and interpreted. The compiler translates a Java program into an intermediate language called Java bytecode, which is platform independent; the bytecode instructions are then parsed and run on the computer by a Java interpreter. Compilation happens just once, while interpretation occurs each time the program is executed. The Java bytecode can be thought of as the machine code instructions for the Java Virtual Machine (JVM): every Java interpreter, whether it is a development tool or a browser that can run applets, is an implementation of the JVM, and the JVM can also be implemented in hardware. You can compile a Java program into bytecode on any platform that has a Java compiler, and that bytecode can then run on any implementation of the JVM; for example, the same Java program can run on Solaris or Macintosh.
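As a trivial illustration of compile once, run anywhere (the class name is illustrative):

```java
// Compile once on any platform that has a Java compiler:  javac HelloJvm.java
// Run the resulting bytecode on any JVM implementation:   java HelloJvm
public class HelloJvm {
    public static void main(String[] args) {
        System.out.println("Same bytecode on Solaris, Macintosh, or Windows.");
    }
}
```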

Java is Independent of Platform:

A platform is the hardware or software environment in which a program executes. The Java platform differs from most other platforms in that it is a software-only platform that runs on top of hardware-based platforms; most other platforms are described as a combination of operating system and hardware.

There are two components in Java Platform:

1. The JVM, known as the "Java Virtual Machine"

2. The Java API, known as the "Java Application Programming Interface"

As already introduced earlier in this document, the JVM is the base of the Java platform and is ported onto various hardware-based platforms.

The Java Application Programming Interface (API) is a large collection of ready-made software components, such as graphical user interface (GUI) widgets, that provide many useful capabilities.

The Java API is grouped into libraries (packages) of related components. The highlights below indicate the functionality provided in each area of the Java API packages.

The packages supply software components covering a wide range of programming needs, and programs of many kinds are well supported by the Java API. The core API is fully implemented and included in every Java platform.

The following features are given by the core API:

The essentials: data structures, I/O, objects, strings, and threads.

System properties: time, date, and so on.

Applets: the set of conventions used by Java applets.

Networking: TCP and UDP sockets, IP addresses, and URLs.

Internationalisation: help for writing programs that can be localized for users worldwide; programs can automatically adapt to specific locales and be displayed in the appropriate language.

Programming in Java:

* JVM

* Java API (Application Program Interface)

* Java program

* Hardware

The JVM and the API insulate a Java program from hardware dependencies. Because it is a platform-independent environment, Java can be somewhat slower than native code.

Nevertheless, well-tuned interpreters, "just-in-time" bytecode compilers, and smart compilers can bring the performance of Java programs close to that of native code without threatening portability.

What are the abilities of Java?

However, Java programming is not just script writing for entertainment on the World Wide Web (WWW). Java is a powerful software platform and a general-purpose, high-level programming language.

Networking:

This section discusses a multi-threaded client/server socket class. The developer decides whether a thread is needed in the program; a thread is not mandatory for the developer. You can find socket classes here and in other places on the Internet, but few provide feedback through event detection to your application the way this one does. This class provides detection of events such as establishment of a connection, drop of a connection, termination of a connection, and reception of data, including packets of size 0 bytes.

Explanation:

This article discusses communication over UDP and TCP, both of which are supported by the new socket class. The class has some advantages over other classes and socket-programming articles. Primarily, this class does not impose the limitation of requiring a window handle to be provided; that limitation is bad if all you want is a simple console application, so this library does not have it. The class also provides threading support automatically for you, handling socket peer connection and peer disconnection. It offers some optional features not found in the sockets I have seen previously. Both server and client sockets are supported: a server socket is one that accepts many connections, and a client socket is one that connects to a server. The class can still be used to let two applications communicate without establishing a connection; in that case you create two UDP server sockets, one for each application. This class also reduces the coding needed to create chat-like applications and inter-process communication (IPC) between two or more applications or processes. Error-handled TCP/IP communication between two consistent peers is supported as well. For UDP only, a smart way of addressing controls which destination the data is transmitted to; TCP operation deals only with communication between two peers.
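A minimal sketch of such a threaded, event-reporting TCP server using the standard java.net classes (the port number and event messages are illustrative; this is not the article's actual library):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class EventServer {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(5000); // listen on port 5000
        while (true) {
            final Socket peer = server.accept();      // connection established event
            new Thread(new Runnable() {               // one thread per peer
                public void run() {
                    try {
                        BufferedReader in = new BufferedReader(
                                new InputStreamReader(peer.getInputStream()));
                        String line;
                        while ((line = in.readLine()) != null) {
                            System.out.println("data received: " + line);
                        }
                    } catch (IOException e) {
                        System.out.println("connection dropped");
                    } finally {
                        System.out.println("connection terminated");
                        try { peer.close(); } catch (IOException ignored) {}
                    }
                }
            }).start();
        }
    }
}
```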

Network Client/Server Analysis:

The TCP/IP stack

The TCP/IP stack is shorter than the OSI stack.

TCP is a connection-oriented protocol, whereas UDP (User Datagram Protocol) is a connectionless protocol.

IP datagrams

The IP layer provides a connectionless and unreliable delivery system. It considers each datagram independently of the others; any association between datagrams must be supplied by the higher layers. The IP layer supplies a checksum that covers its own header, which includes the source and destination addresses. It handles routing through an internet, and it is also responsible for breaking large datagrams into smaller ones for transmission and reassembling them at the destination.

User Datagram Protocol (UDP)

UDP is also connectionless and unreliable. What it adds to IP is a checksum over the contents of the datagram and port numbers.
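A minimal sketch of connectionless UDP communication with Java's DatagramSocket (the port number and message are illustrative):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpDemo {
    public static void main(String[] args) throws Exception {
        DatagramSocket receiver = new DatagramSocket(4445); // bind the receive port
        DatagramSocket sender = new DatagramSocket();

        byte[] msg = "hello".getBytes();
        sender.send(new DatagramPacket(msg, msg.length,
                InetAddress.getLocalHost(), 4445));          // no connection needed

        byte[] buf = new byte[256];
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        receiver.receive(packet);                            // blocks until a datagram arrives
        System.out.println(new String(packet.getData(), 0, packet.getLength()));

        sender.close();
        receiver.close();
    }
}
```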

Transport Control Protocol (TCP):

TCP is a reliable, connection-oriented protocol that supplies a logic layer above IP: it provides a virtual circuit that two processes can use to communicate.

Internet addresses

To employ a service we must first be able to find it. The Internet uses an address scheme so that machines can be located: an IP address is a 32-bit integer that encodes a network ID and further addressing. The network ID falls into various classes according to the size of the network address.

Network address

Class A uses 8 bits for the network address, leaving 24 bits for other addressing. Class B uses 16 bits for the network address, and Class C uses 24 bits. Class D addresses are reserved for multicast.

Subnet address

In UNIX, networks are internally divided into sub-networks. A sub-network may use 10-bit addressing, which allows up to 1024 different hosts.

Host address

On our subnet, the host address is 8 bits, which gives space for 256 machines.

Port addresses

A port number identifies a service available on a host; a port number is 16 bits long. To send a message to a server, we must send it to the port on which that service is executing on the host.

Sockets

A socket is a data structure maintained by the system to handle network connections. A socket is created by a call to socket(), which returns an integer similar to a file descriptor.

ServerSocket

A ServerSocket listens for socket requests and performs functions such as message management, file sharing, and database sharing.

JDBC

In an attempt to set an independent database standard API for Java, Sun Microsystems developed Java Database Connectivity, or JDBC. It offers a generic SQL database access mechanism that provides a consistent interface to a multiplicity of RDBMSs. This consistent interface is achieved through the use of "plug-in" database connectivity modules: a vendor that wants JDBC support must provide the driver for each platform on which Java and the database run.

To gain wider acceptance of JDBC, Sun based JDBC's framework on ODBC, which was presented earlier in this chapter. ODBC has broad support across many platforms; basing JDBC on ODBC allows vendors to bring JDBC drivers to market much faster than developing a whole fresh connectivity solution.
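A minimal sketch of JDBC use over the JDBC-ODBC bridge, matching the JDK 1.5 and MS-Access setup listed earlier (the DSN, table, and column names are assumptions for illustration):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class JdbcDemo {
    public static void main(String[] args) throws Exception {
        // Load the JDBC-ODBC bridge driver shipped with JDK 1.5 (DSN name is illustrative).
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        Connection con = DriverManager.getConnection("jdbc:odbc:QkdpDSN");

        // A simple parameterized SELECT; table and column names are assumptions.
        PreparedStatement ps = con.prepareStatement(
                "SELECT session_key FROM keys WHERE user_id = ?");
        ps.setString(1, "alice");
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            System.out.println("key: " + rs.getString("session_key"));
        }
        rs.close();
        ps.close();
        con.close();
    }
}
```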

JDBC Goals

Few software packages are designed without goals in mind. JDBC is one that, because of its many goals, drove the development of the API. These goals, in conjunction with early reviewer feedback, have finalized the JDBC class library into a solid framework for building database applications in Java.

The goals set for JDBC are important: they give some insight into why the classes and functionality behave the way they do. The seven design goals for JDBC are as follows:

1. SQL Level API

The designers felt the important thing to concentrate on was to define a SQL interface for Java. It is not the lowest-level database interface, yet it is at a low enough level for higher-level tools and APIs to be created on top of it, and at a high enough level for application programmers to use it confidently. Attaining this goal allows potential tool vendors to "generate" JDBC code and to hide many of JDBC's complexities from the end user.

2. SQL Conformance

The syntax of SQL varies from database vendor to database vendor. In an effort to support a wide variety of vendors, JDBC allows any query statement to be passed through to the underlying database driver. This lets the connectivity modules handle non-standard functionality in a manner suitable for their users.

3. JDBC must be implementable on top of common database interfaces

The JDBC SQL API must "sit" on top of other common SQL level APIs. This goal allows JDBC to use existing ODBC level drivers through a software interface that translates JDBC calls to ODBC calls and vice versa.

4. Provide a Java interface that is consistent with the rest of the Java system

Because of Java's acceptance in the user community so far, the designers believe that they must not stray from the current design of the core Java system.

5. Keep it simple

This goal probably appears in every software design goal listing. JDBC is no exception. Sun felt that the design of JDBC should be exceptionally simple, allowing only one method of completing a task per mechanism. Allowing duplicate functionality only serves to confuse the users of the API.

6. Use strong, static typing wherever possible

Strong typing allows additional error checking to be done at compile time; correspondingly, fewer errors become visible at runtime.

7. Keep the common cases simple

The usual SQL calls used by the programmer are simple SELECTs, INSERTs, DELETEs, and UPDATEs; these queries must be uncomplicated to perform with JDBC. However, more complex SQL statements must also be possible.

5. System Design

5.1 Design Overview

1. Sender Module

Three parties participate in this process: the sender, the receiver, and the trusted centre. First the user sends the secret key to the TC (trusted centre); the secret key is authenticated by the trusted centre (TC) to admit the client, and the trusted centre then generates the session key; otherwise the user is not permitted to transmit data.

After the sender sends the message to the trusted centre, the trusted centre takes the session key and encrypts the whole message with it. The encrypted message is then attached to the qubit, and the entire encrypted message is sent to the genuine receiver.

2. Trusted Center

When the sender sends a message to the trusted centre, the TC asks the sender to send the secret key and then confirms whether that secret key belongs to the original user. If the secret key belongs to that user, the TC allows the user to perform secure transmission of data.

The TC checks whether the user's secret key is correct in order to authenticate that user for secure data transmission. If the user is permitted to transmit data, the trusted centre generates a session key, which is the key shared between sender and receiver for encrypting and decrypting the data. The secret key size is usually in the range of 8 bits. This shared secret key is created from a pseudo-random prime number and the exponential value of a random number.

After obtaining the shared secret key, the trusted centre generates a random string, converts that random string to hexadecimal code, and then converts the hexadecimal to binary code. Of the two binary numbers, it selects the smallest binary value and obtains the quantum bits 0 and 1.
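A minimal sketch of the string-to-hexadecimal-to-binary conversion described above (the string length is illustrative, and this is one plausible reading of the step, not the project's exact code):

```java
import java.security.SecureRandom;

public class QubitPrep {
    public static void main(String[] args) {
        // Generate a short random string (illustrative length).
        SecureRandom rnd = new SecureRandom();
        byte[] raw = new byte[2];
        rnd.nextBytes(raw);

        // Convert each byte to hexadecimal, then hexadecimal to binary.
        StringBuilder hex = new StringBuilder();
        StringBuilder bin = new StringBuilder();
        for (byte b : raw) {
            int v = b & 0xFF;
            hex.append(String.format("%02x", v));
            // Pad to 8 bits so every byte yields the same number of qubit positions.
            String bits = Integer.toBinaryString(v);
            while (bits.length() < 8) bits = "0" + bits;
            bin.append(bits);
        }
        System.out.println("hex:    " + hex);
        System.out.println("binary: " + bin); // each bit selects a qubit value/basis
    }
}
```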

The session key and the qubit are combined with each other, depending on the permutation of the qubit bits, as follows (a worked mapping is sketched after this list):

1) 1/√2(p[0] + p[1]) (if the values are 0 and 0)

2) 1/√2(p[0] - p[1]). (if the values are 0 and 1)

3) p[0] (if the values are 1 and 0)

4) p[1] (if the values are 1 and 1)
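A minimal sketch of this mapping from a pair of bits (one session-key bit, one random-string bit) to the four states listed above; indexing the pair this way is an assumption for illustration:

```java
public class QubitState {
    // Returns a label for the prepared qubit state given one session-key bit
    // and one random-string bit, following the four cases listed above.
    static String prepare(int keyBit, int rndBit) {
        if (keyBit == 0 && rndBit == 0) return "1/sqrt(2)(p[0] + p[1])";
        if (keyBit == 0 && rndBit == 1) return "1/sqrt(2)(p[0] - p[1])";
        if (keyBit == 1 && rndBit == 0) return "p[0]";
        return "p[1]"; // keyBit == 1 && rndBit == 1
    }

    public static void main(String[] args) {
        for (int k = 0; k <= 1; k++)
            for (int r = 0; r <= 1; r++)
                System.out.println(k + "," + r + " -> " + prepare(k, r));
    }
}
```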

The session key is encrypted with the master key; this technique is called hashing. All the values are then copied to the trusted centre's storage.

To encrypt the data, the user needs the qubit and the original session key, so the key distribution sends the real session key and the qubit to the sender. The same distribution of the session key and qubit also serves the receiver in decrypting the encrypted message, since the receiver needs the session key and the qubit to decrypt the received message.

3. Receiver Module

The receiver receives the encrypted data along with the hashed session key and the qubit. It then checks the qubit with the trusted centre; if it is genuine, the receiver creates the master key, hashes the receiver's session key as well as the sender's session key, and compares the two hashes to verify whether both session keys are the same or different. This improves key authentication.

Using the session key, the encrypted message is decrypted, and the receiver can then see the actual message.
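A minimal sketch of the hash-and-compare step using java.security.MessageDigest (choosing SHA-256, and feeding the master key into the hash as a prefix, are assumptions for illustration):

```java
import java.security.MessageDigest;
import java.util.Arrays;

public class KeyCheck {
    // Hash a session key together with the master key (illustrative construction).
    static byte[] hash(byte[] masterKey, byte[] sessionKey) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(masterKey);
        md.update(sessionKey);
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        byte[] master = "master-key".getBytes();
        byte[] senderKey = "session-123".getBytes();
        byte[] receiverKey = "session-123".getBytes();

        // The receiver compares the hash of its own key with the sender's hashed key.
        boolean same = Arrays.equals(hash(master, senderKey), hash(master, receiverKey));
        System.out.println(same ? "session keys match" : "session keys differ");
    }
}
```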

Citation: "Provably Secure Three-Party Authenticated Quantum Key Distribution Protocols," IEEE Transactions on Dependable and Secure Computing, vol. 4, no. 1, pp. 71-80, Jan.-Mar. 2007, doi:10.1109/TDSC.2007.13

4. Cryptography

Cryptography secures information from harm by converting the data into a different format in which no one else can read its contents. Data changed into an unreadable format is called ciphertext. Those who want the data must have its secret key in order to decrypt it and own the message. Translating plaintext into an unreadable format (ciphertext) is called encryption; changing the original information to unreadable form by means of a translation algorithm and translation table is called enciphering. Deciphering is the process of changing the ciphertext (the encrypted message) back into plaintext (the original message).

Cryptographic systems are broadly classified into symmetric key systems and public key systems. In a symmetric key system, a single key is used by both sender and receiver for translating plaintext into ciphertext and decrypting the ciphertext back to plaintext. Unlike symmetric key systems, public key systems have two keys: a public key, which is open to everyone, and a private key, which is known only to the genuine end user. Each system uses an algorithm to encrypt and decrypt messages. In a symmetric key cryptographic algorithm, the sender uses one key to encrypt the original text into ciphertext and the receiver uses the same key to decrypt the ciphertext back to plaintext; Blowfish and DES (Data Encryption Standard) are examples of symmetric key encryption algorithms. In a public key encryption algorithm, the sender translates the original message to ciphertext using the receiver's public key, and the receiver, who alone holds the corresponding private key, decrypts the cipher message with it; RSA (Rivest, Shamir, and Adleman) and ECC (Elliptic Curve Cryptography) are the best-known examples of public key encryption algorithms.
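A minimal sketch of symmetric-key encryption and decryption with a single shared key (using AES from javax.crypto rather than the DES or Blowfish algorithms named above; the message is illustrative):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class SymmetricDemo {
    public static void main(String[] args) throws Exception {
        // Both sender and receiver share this one key.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        Cipher cipher = Cipher.getInstance("AES");

        // Sender: plaintext -> ciphertext.
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal("secret session data".getBytes());

        // Receiver: ciphertext -> plaintext with the same key.
        cipher.init(Cipher.DECRYPT_MODE, key);
        System.out.println(new String(cipher.doFinal(ciphertext)));
    }
}
```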

Cryptography plays an important role in the security of multicasting. Take a stock data sharing group as an example: the group delivers stock data to certain users around the world, and this stock information is distributed only to users who have the right to use the service. But the set of users is not fixed; it changes. Users joining the group must get the current information, but they must not get information released before they joined; likewise, if a user someday terminates permanently from the group, he must not get the information released after he leaves.

Citation: G. Li, "Efficient Network Authentication Protocols: Lower Bounds and Optimal Implementations," Distributed Computing, vol. 9, no. 3, pp. 131-145, 1995.

5.2 DATA FLOW DIAGRAMS:

A data flow diagram is a graphical tool used to illustrate and examine the movement of information through a system. Logical data flow diagrams show the transformation of information from input to output logically, independent of the physical components associated with the system. Physical data flow diagrams show the implementation and movement of data among users, offices, and departments. Several data flow diagrams may be needed to explain a system.

The standard DFD symbols denote: a process that transforms data flow; a source or destination of data; a data flow; and a data store.

THERE ARE FOUR TYPES OF DATA FLOW DIAGRAMS:

1. Current Physical

2. Current Logical

3. New Logical

4. New Physical

CURRENT PHYSICAL:

In the current physical data flow diagram, the process labels include the name of the computer system and the names and positions of the people, identifying the technology used to process the data. Likewise, data stores and data flows are labelled with the actual physical media on which the data is stored, such as computer files, computer tapes, file folders, and business forms.

CURRENT LOGICAL:

In the current logical model, the physical aspects of the system are discarded, reducing the current system to the essence of the data and the processing it performs, independent of the actual physical form.

NEW LOGICAL:

If the user is fully satisfied with the present system's functionality, the new logical model is like the current logical model; but if the present system has problems, it is developed into a new logical model that looks quite different from the current logical model. During this process, inefficient flows are found, some functions are added, and some functions are removed.

NEW PHYSICAL:

The new physical data flow diagram represents the physical implementation of the new system.

RULES GOVERNING THE DFD'S

PROCESS

1) No process can have only outputs and no inputs; an object with only outputs is a source.

2) No process can have only inputs and no outputs; an object with only inputs is a sink.

3) A process has a verb-phrase label.

DATA STORE:

* Data cannot move directly from one store to another; all such movement must go through a process.

* Data cannot move directly from an outside source into a data store; a process must receive the data from the source and place it into the data store.

* A data store has a noun-phrase label. A data store is also called an information store.

SOURCE OR DESTINATION:

A sink is a destination of data, and a source is an origin of data.

1) Data cannot move directly between a source and a sink; it must be moved by a process.

2) A source or sink has a noun-phrase label.

DATA FLOW:

1) Between symbols, information flows in only one direction. To show a read before an update between a process and a data store, the information flow is shown in both directions; the two flows are drawn as separate arrows because they represent two dissimilar steps occurring at different times.

2) A join in a data flow diagram indicates that the same type of information coming from different processes is sent to one single place.

3) Information may not move directly back to the process it left, because such a flow would be misleading; another process must handle the information flow and return it to the earlier process.

4) A flow of information into a data store means deletion or modification of the information.

5) A flow of information out of an information store means retrieval of information.

Authentication means that when a user receives a message, the sender's identity can be established. In the context of secure multicast, this translates into two needs, covering key distribution and data distribution:

Key authenticity: the session key is guaranteed to have been generated by the trusted centre.

Data authenticity: users are able to decide what kind of data they are receiving, that is, whether the data is genuine or not. Attackers may try to send malicious data to the users; in this situation the user should know what kind of data it is. If the trusted centre sends data to the user and at the same time an attacker tries to send malicious data, this cryptography lets the user know which data is genuine.

5.3 Architectural Design:

Level 1

This module explains the generation of the secret key. The secret key is generated using a random number. A random string is taken, and the secret key and the random string are concatenated; qubit generation is then performed over the concatenated string. Hashing is done on the qubit generated from both the random number and the string, and this hashing step completes the key generation process.

Level 0

5.4 Use case Diagram:

The application comprises four modules:

a. Sender

b. Receiver

c. Trusted Center

d. Quantum Key Generation

Here the sender sends a message to the trusted center; the trusted centre generates a session key and a qubit and sends both of these to the sender and the corresponding receiver. The sender then encrypts the entire message with the help of the session key and sends it to the receiver. The receiver decrypts the encrypted message with the help of the shared secret key. Finally, the corresponding receiver can see the actual message.

5.5 Object Interaction Diagram:

The diagrams above explain the overall implementation process of the application, which consists of the Sender, the Receiver, and the Trusted Center. The Trusted Center implements random number generation, session key generation, and the creation of qubits.

6. System Testing

The aim of testing is to discover errors. Testing is an attempt to find every conceivable fault or weakness in a work product. It provides a way to check the functionality of components and to verify that the software system meets its requirements and does not fail in an unacceptable manner. Different types of tests are used here, and each type addresses a specific testing requirement.

Types of Tests

6.1 Unit testing

Unit testing involves designing valid test cases to confirm that the internal program logic is implemented properly and that program inputs produce suitable outputs. All code paths, data flows, and decision branches should be validated by this testing. Unit testing of individual software units is done after the completion of an individual unit and before integration. It is testing at the component level and tests a specific business process or application; it guarantees that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.

6.1.1 Functional test:

Functional tests provide systematic demonstrations that the functions tested are available as specified by the business and technical requirements, system documentation, and user manuals.

Functional testing is centered on the following items:

Valid input: identified classes of valid input must be accepted.

Invalid input: identified classes of invalid input must be rejected.

Functions: identified functions must be exercised.

Output: identified classes of application outputs must be exercised.

Systems/procedures: interfacing systems or procedures must be invoked.

6.1.2 System testing:

System testing ensures that the complete integrated system meets requirements. It tests a configuration of the system to ensure known and predictable results; a simple example of this testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows.

6.1.3 Performance test:

In this test we check that the output of the system is produced within the required time limits: the time taken to send a request to the system, compile, and give the response to the user when retrieving the result.

6.2 Integration testing:

This testing is the incremental testing of two or more integrated components on a single platform, intended to provoke failures caused by interface defects. The main task of integration testing is to check that software applications and components interact without error.

Integration testing for Database Synchronization:

* When a new user sends a signup request to the group controller to join the group, this link is tested.

* The link is halted if the logged-in client does not have the rights needed to access a screen.

* When the client requests message encryption and decryption, these actions are performed on the client side.

6.3 Acceptance Testing:

In every project, the active involvement of the end user is much needed, and user acceptance testing is the crucial phase of any project. This testing also makes sure that the system meets all the functional requirements.

Acceptance testing for Data Synchronization:

§ Separate jobs for changing the database tables are assigned by the group controller.

§ The group controller sets terms and conditions for performing the multicast functions.

§ When an encrypted message is sent from one group to another, the key generation center acts as a router.

§ The groups need keys to encrypt and decrypt messages, and these keys are produced by the key generation center.

7. Implementation

Client authentication and key confirmation are very good in classical cryptography, whereas in quantum cryptography threats such as passive attacks and replay attacks can be easily refused. Combining the advantages of quantum cryptography with the advantages of classical cryptography produces the two quantum key distribution protocols. The contributions of these two protocols are:

1) Using these two protocols we can identify eavesdropping, keep away from replay threats very easily, and prohibit man-in-the-middle attacks.

2) Session key confirmation and client authentication can be performed in just a single step, without any public discussions.

3) The secret key between a user and the trusted centre can be used many times in this process.

4) The proposed schemes are very safe in this implementation process.

Citation: C.H. Bennett and G. Brassard, "Quantum Cryptography: Public Key Distribution and Coin Tossing," Proc. IEEE Int'l Conf. Computers, Systems, and Signal Processing, pp. 175-179, 1984.

Quantum Key Distribution Protocols under the random oracle model:

The trusted centre and a user synchronize their measurement bases based on a preshared secret key. During session key distribution, the preshared secret key is combined with a random string to produce another key for enciphering the session key. Even if the same session key is retransmitted, an eavesdropper cannot obtain the qubits. As a result, the preshared secret key remains usable for a long time and can be used again and again between users and the trusted centre. A user can verify the correctness of the session key, and user authentication becomes possible, by joining the quantum channel with classical cryptographic techniques; this also identifies eavesdroppers.
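A minimal sketch of combining the preshared secret key with a random string to encipher the session key (deriving a one-time key with SHA-256 and XOR-ing it against the session key is an illustrative construction, not the protocol's exact cipher):

```java
import java.security.MessageDigest;
import java.security.SecureRandom;

public class SessionKeyCipher {
    public static void main(String[] args) throws Exception {
        byte[] preshared = "preshared-secret".getBytes(); // long-lived key with the TC
        byte[] sessionKey = new byte[16];                  // fresh session key
        byte[] randomStr = new byte[16];                   // fresh per-run random string
        SecureRandom rnd = new SecureRandom();
        rnd.nextBytes(sessionKey);
        rnd.nextBytes(randomStr);

        // Derive a one-time key from the preshared secret and the random string.
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(preshared);
        md.update(randomStr);
        byte[] oneTime = md.digest();

        // Encipher the session key; a fresh random string gives a fresh one-time key.
        byte[] cipher = new byte[sessionKey.length];
        for (int i = 0; i < sessionKey.length; i++) {
            cipher[i] = (byte) (sessionKey[i] ^ oneTime[i]);
        }
        System.out.println("enciphered session key bytes: " + cipher.length);
    }
}
```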

The random oracle model is used to illustrate the security of the proposed protocols. In the random oracle model, the event of an attacker breaking the three-party quantum key distribution protocols is used by a simulator to break the underlying atomic primitives. The proposed three-party quantum key distribution protocols are therefore secure provided the underlying primitives are secure.

The software testing process is used to discover the correctness, security, and quality of computer software. Testing provides information about the product to the stakeholders. It is the process of executing an application or program with the intention of finding errors in that program, because the quality of the program is very important; testing cannot establish the complete correctness of software. In testing, the specification is compared with the state and behavior of the product. Software quality assurance (SQA) is distinguishable from software testing: SQA includes business process areas in addition to performing testing, whereas software testing performs only testing.

There are many techniques for software testing. Some merely create routine procedures and apply them, but efficient testing of a product is an investigative process. Testing can be defined as a process of questioning a product in order to evaluate it: we perform operations on the product, and in reaction to these operations the product exhibits its behavior.

INTRODUCTION:

Software engineers commonly distinguish software faults from software failures. A failure occurs when the software does not do what the user expects. A fault is an error in the program, which may in fact manifest as a failure; it can also be called an error in the correctness of the computer program. A fault turns into a failure when the exact computation conditions are met and the erroneous part of the program executes on the CPU. A fault can also turn into a failure when the software is ported to a different compiler or platform, or when it is extended. Software testing is a technical investigation of a product under test, carried out to provide stakeholders with quality-related information.

Software testing may be viewed as a subfield of quality assurance, though the two exist in parallel. Auditors and software process specialists take a broader view of software and its development: they observe and then modify the software engineering process itself to reduce the number of faults that end up in delivered code, or to deliver faster.

There is a point of regulation involved: the desired result of testing is a level of confidence in the software, such that the organization is convinced the software in use has an acceptable defect rate. What constitutes an acceptable defect rate depends entirely on the nature of the software: an arcade video game designed to simulate flying an airplane has a much higher tolerance for defects than software intended to control an actual airliner.

A problem with software testing is that the number of defects in a product and the number of configurations may be very large. Bugs that occur only once in a while are tricky to find in testing. A system that is expected to function without faults for a certain length of time must generally already have been tested for at least that length of time. This has strict consequences for projects that must write long-lived, reliable software.

A common practice is for software testing to be performed by an independent group of testers after the functionality is developed but before it is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays. A different approach is to begin software testing at the same instant the project begins and continue it nonstop until the project ends.

It is frequently understood that the earlier a defect is found, the cheaper it is to fix.

In counterpoint, a number of emerging software disciplines, such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process, unit tests are written first, by the programmers. Of course these tests fail initially, as they are expected to; as code is written, it passes incrementally larger portions of the test suite. The test suite is constantly updated as new failure conditions and corner cases are exposed, and it is integrated with any regression tests that are developed. Unit tests are maintained along with the rest of the software source code and are normally integrated into the build process. The software tools, samples of data input and output, and configurations are all referred to collectively as a test harness.

History:

The separation of debugging from testing was initially introduced by Glenford J. Myers in his 1979 book "The Art of Software Testing". Although his attention was on breakage testing, it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. Drs. Dave Gelperin and William C. Hetzel classified in 1988 the phases and goals in software testing as follows:

Until 1956 was the debugging-oriented period, when testing was often associated with debugging: there was no clear difference between testing and debugging. From 1957 to 1978 was the demonstration-oriented period, in which debugging and testing were distinguished. The time between 1979 and 1982 is described as the destruction-oriented period, whose main aim was to find errors. 1983 to 1987 is classified as the evaluation-oriented period. From 1988 on, it has been regarded as the prevention-oriented period, in which tests demonstrate that the software satisfies its requirements, so that faults can be detected and prevented. Dr. Gelperin chaired the IEEE 829-1988 standard, and Dr. Hetzel wrote the book "The Complete Guide of Software Testing"; both works remain consistent sources of reference. Dr. Gelperin and Jerry E. Durant also developed High Impact Inspection Technology, which builds on traditional inspections but is used as a test-driven additive.

White-box and black-box testing:

White-box and black-box testing are terms that describe the point of view a test engineer takes when designing test cases: black box means an external view and white box an internal view. Software testing is partly intuitive but largely systematic: good testing involves much more than just running the program a few times to see whether it works. Software testing is the process of executing software in a well-controlled manner in order to answer the question "Does this software behave as specified?" Testing is used in association with verification and validation. Verification is the checking of items, including software, for conformance and consistency; testing is one of the important techniques of verification, which also uses inspections and walkthroughs. Validation checks that the system is what the user actually required.

* Validation: Are we doing the right job?

* Verification: Are we doing the job right?

In order to achieve consistency in the testing style, it is essential to have and follow a set of testing principles. This enhances the effectiveness of testing within the team members and therefore contributes to improved productivity. The purpose of this document is to present a summary of testing and its techniques.

* Regression testing: used to repeat previously successful tests, to guarantee that changes made to the software have not introduced new bugs or side effects.

In recent years the term grey box has come into ordinary usage. The typical grey box tester is allowed to set up or control the testing environment, such as a database, and may observe the state of the product after his actions, for example by performing a SQL query on the database. It is used almost completely for client-server applications or others that use a database as a repository of information, but it might also apply to a tester who has to manipulate XML files or configuration files directly. It is also suitable for testers who understand the inner workings of the algorithm of the software under test and can write tests specifically for an expected result. For example, testing a data warehouse implementation involves loading the target database with the required information and verifying the correctness of the data and of loading the data into the given tables.

Implementation is the process of converting a proposed system design into an operational one. There are three types of implementation:

Ø Implementation of a computer system to replace a manual system. The troubles encountered are converting files, training users, and verifying printouts for integrity.

Ø Implementation of a new computer system to replace an existing one. This is often a complicated conversion; if not appropriately planned, there can be several troubles.

Ø Implementation of a modified application to replace an existing one on the same computer. This category of conversion is fairly simple to handle, provided there are no major changes to the files.

The implementation in the generic tools project is completed in every module. In the first module, user identification is performed: each and every user is recognized as genuine or not, and unauthorized use of any feature is strictly avoided.

In the following table creation module, tables are created with user-defined fields, and the user may create many tables at a time. Users can specify conditions and constraints in the creation of the table. The module also maintains the user requirements during the project.

* In the updating module, the client may update, delete, or insert a record into the database. This is a very significant module in the generic code project. The user specifies the field value in the form, and the generic tools automatically supply the complete field values for that particular record.

In the reporting component, the user is able to obtain reports from the database in a 2-dimensional or 3-dimensional view. The user picks the table and specifies the condition, and the report is then generated for the user.

This project proposes two quantum key distribution protocols to exhibit the advantages of mixing the two cryptographies, classical and quantum. In classical cryptography, passive threats and replay threats cannot easily be defended against using three-party key distribution protocols, whereas in quantum cryptography such threats can be defended against easily. The proposed system accomplishes client authentication and key confirmation, and it also maintains the secret key between each client and the trusted centre for a long duration, whereas in the existing system the secret key between each client and the trusted centre cannot be made long-lasting. Other protocols normally need more communication rounds, but the quantum key distribution protocols need fewer. Even though quantum channels are very expensive now, they may not be in the future. The proposed quantum key distribution protocols are shown to be secure in the random oracle model. Mixing the advantages of quantum cryptography with the advantages of classical cryptography in the design of key distribution protocols gives a new direction in this design phase.

9. Conclusion:

For a broad and dynamic multicast system, the proposed system is efficient and authenticated. It is based on a bilinear map. Examining the existing system, we exploit an identity tree to achieve the authentication of group members. A bigger group is split into many small-scale