Security In Large Networks Using Mediator Protocols Computer Science Essay


The combination of 3AQKDP (implicit) and 3AQKDPMA (explicit) quantum cryptography is used to provide authenticated secure communication between sender and receiver.

In quantum cryptography, quantum key distribution protocols (QKDPs) employ quantum mechanisms to distribute session keys and public discussions to check for eavesdroppers and verify the correctness of a session key. However, public discussions require additional communication rounds between a sender and receiver. Quantum cryptography has the advantage of easily resisting replay and passive attacks.

The 3AQKDP provides implicit user authentication, which ensures that confidentiality is possible only for legitimate users and that mutual authentication is achieved only after secure communication using the session key starts.

The implicit quantum key distribution protocol (3AQKDP) has two phases, a setup phase and a key distribution phase, which provide three-party authentication with secure session key distribution. In this protocol there is no direct mutual authentication between sender and receiver; both parties communicate through the trusted center.

The explicit quantum key distribution protocol (3AQKDPMA) likewise has a setup phase and a key distribution phase providing three-party authentication with secure session key distribution. Here there is explicit mutual authentication between sender and receiver, and both parties communicate directly, with the trusted center providing authentication.

The disadvantage of running 3AQKDP and 3AQKDPMA as separate processes is that they provide authentication only for the message: they can identify security threats in the message, but not in the session key.

1. Introduction

About the Project

KEY distribution protocols are used to facilitate sharing secret session keys between users on communication networks. By using these shared session keys, secure communication is possible on insecure public networks. However, various security problems exist in poorly designed key distribution protocols; for example, a malicious attacker may derive the session key from the key distribution process. A legitimate participant cannot ensure that the received session key is correct or fresh, and a legitimate participant cannot confirm the identity of the other participant. Designing secure key distribution protocols in communication security is a top priority.

In some key distribution protocols, two users obtain a shared session key via a trusted center (TC). Since three parties (two users and one TC) are involved in session key negotiations, these protocols are called three-party key distribution protocols, in contrast with two-party protocols where only the sender and receiver are involved in session key negotiations. In classical cryptography, three-party key distribution protocols utilize challenge-response mechanisms or timestamps. However, challenge-response mechanisms require at least two communication rounds between the TC and participants, and the timestamp approach needs the assumption of clock synchronization, which is not practical in distributed systems (due to the unpredictable nature of network delays and potential hostile attacks). Furthermore, classical cryptography cannot detect the existence of passive attacks such as eavesdropping. On the contrary, a quantum channel eliminates eavesdropping, and, therefore, replay attacks. This fact can then be used to reduce the number of rounds of other protocols based on challenge-response mechanisms to a trusted center (and not only three-party authenticated key distribution protocols).

In quantum cryptography, quantum key distribution protocols (QKDPs) employ quantum mechanisms to distribute session keys and public discussions to check for eavesdroppers and verify the correctness of a session key. However, public discussions require additional communication rounds between a sender and receiver and cost precious qubits. By contrast, classical cryptography provides convenient techniques that enable efficient key verification and user authentication. Research on previously proposed QKDPs covers theoretical design, security proof, and physical implementation. Three important theoretical designs have been proposed: Bennett and Brassard employed the uncertainty of quantum measurement and four qubit states to distribute a session key securely between legitimate participants; Bennett utilized two nonorthogonal qubit states to establish a session key between legitimate users; and Ekert presented a QKDP based on Einstein-Podolsky-Rosen (EPR) pairs, which requires quantum memories to preserve the qubits of legitimate users. Although these protocols allow legitimate participants to establish a session key without initially sharing secret keys and do not need a TC, their security is based on the assumption of well-authenticated participants. In other words, without this assumption, these protocols can suffer man-in-the-middle attacks. Hwang et al. proposed a modified quantum cryptography protocol that requires every pair of participants to preshare a secret key (an idea similar to this work) for measuring-basis selection. However, the participants have to perform public discussions to verify session key correctness. A previously proposed three-party QKDP requires that the TC and each participant preshare a sequence of EPR pairs rather than a secret key. Consequently, EPR pairs are measured and consumed, and need to be reconstructed by the TC and a participant after one QKDP execution.

Benefits of Three-Party Authentication for Key Distribution Protocols Using Implicit and Explicit Quantum Cryptography

The advantage of combining implicit and explicit quantum cryptography is that the session key can be verified against both the trusted center and the sender, which improves key verification and secures the communication. It also identifies security threats during session key verification.

Another advantage of this project is that it avoids network noise in message transmission by tracking the number of bytes transmitted over the network from sender to receiver and removing any extra byte content received from the network.

2. Organization Profile

Company Profile

At Ecway Infosys Solution, we go beyond providing software solutions. We work with our clients' technologies and business changes that shape their competitive advantages.

Founded in 2008, Ecway Infosys Solution (P) Ltd. is a software and service provider that helps organizations deploy, manage, and support their business-critical software more effectively. Utilizing a combination of proprietary software, services, and specialized expertise, Ecway Infosys Solution (P) Ltd. helps mid-to-large enterprises, software companies, and IT service providers improve consistency, speed, and transparency of service delivery at lower cost. Ecway Infosys Solution (P) Ltd. helps companies avoid many of the delays, costs, and risks associated with the distribution and support of software on desktops, servers, and remote devices. Our automated solutions include rapid, touch-free deployments, ongoing software upgrades, fixes and security patches, technology asset inventory and tracking, software license optimization, application self-healing, and policy management.

About The People

As a team we have the prowess to have a clear vision and realize it too. As a statistical evaluation, the team has more than 40,000 hours of expertise in providing real-time solutions in the fields of embedded systems, control systems, micro-controllers, C-based interfacing, programmable logic controllers, VLSI design and implementation, networking with C/C++ and Java, client-server technologies in Java (J2EE/J2ME/J2SE/EJB), VB & VC++, Oracle, and operating system concepts with Linux.

Our Vision

"Dreaming a vision is possible and realizing it is our goal".

Our Mission

We have achieved this by creating and perfecting processes that are on par with global standards, and we deliver high-quality, high-value, reliable, and cost-effective IT services and products to clients around the world.

3. System Analysis

3.1 Existing System

In classical cryptography, three-party key distribution protocols utilize challenge-response mechanisms or timestamps to prevent replay attacks.

However, challenge-response mechanisms require at least two communication rounds between the TC and participants, and the timestamp approach needs the assumption of clock synchronization, which is not practical in distributed systems (due to the unpredictable nature of network delays and potential hostile attacks).

Furthermore, classical cryptography cannot detect the existence of passive attacks such as eavesdropping. A quantum channel, by contrast, eliminates eavesdropping and therefore replay attacks; this fact can then be used to reduce the number of rounds of other protocols based on challenge-response mechanisms to a trusted center (and not only three-party authenticated key distribution protocols).

3.2 Limitations of Existing System

The disadvantage of running 3AQKDP and 3AQKDPMA as separate processes is that they provide authentication only for the message: they can identify security threats in the message, but not in the session key.

3.3 Proposed System.

In quantum cryptography, quantum key distribution protocols (QKDPs) employ quantum mechanisms to distribute session keys and public discussions to check for eavesdroppers and verify the correctness of a session key. However, public discussions require additional communication rounds between a sender and receiver and cost precious qubits. By contrast, classical cryptography provides convenient techniques that enable efficient key verification and user authentication.

There are two types of Quantum Key Distribution Protocol, they are

1. The Proposed 3AQKDP

This section describes the details of the 3AQKDP by using the notations defined in previous sections. Here, we assume that every participant shares a secret key with the TC in advance either by direct contact or by other ways.

2. The Proposed 3QKDPMA

The proposed 3QKDPMA can be divided into two phases: the Setup Phase and the Key Distribution Phase. In the Setup Phase, users preshare secret keys with the TC and agree to select polarization bases of qubits based on the preshared secret key. The Key Distribution Phase describes how Alice and Bob share the session key with the assistance of the TC and achieve explicit user authentication.

4. Problem Formulation

This work presents a combination of classical cryptography (existing) and quantum cryptography (proposed): two three-party QKDPs, one with implicit user authentication and the other with explicit mutual authentication, both of which perform authentication using quantum mechanisms.

Classical cryptography provides convenient techniques that enable efficient key verification and user authentication, but it cannot detect eavesdropping. The enhanced key distribution protocol, which combines classical and quantum cryptography, improves both security and authentication.

Software Requirement Specification

The software requirement specification is produced at the culmination of the analysis task. The function and performance allocated to software as part of system engineering are refined by establishing a complete information description, a functional representation, a representation of system behavior, an indication of performance requirements and design constraints, and appropriate validation criteria.

User Interface

* Swing - Swing is a set of classes that provides more powerful and flexible components than are possible with AWT. In addition to familiar components such as buttons, checkboxes, and labels, Swing supplies several exciting additions, including tabbed panes, scroll panes, trees, and tables (a minimal example follows this list).

* Applet - An applet is a dynamic and interactive program that can run inside a web page displayed by a Java-capable browser such as HotJava or Netscape.
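The following is a minimal Swing sketch, not part of the project code, showing a frame with a tabbed pane and a scroll pane, two of the components mentioned above. The class name and layout are illustrative only.

import javax.swing.*;

// Minimal Swing sketch (illustrative): a frame with a tabbed pane and a scroll pane.
public class SwingDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                JFrame frame = new JFrame("Swing Demo");
                JTabbedPane tabs = new JTabbedPane();
                tabs.addTab("Buttons", new JButton("Click me"));
                tabs.addTab("Text", new JScrollPane(new JTextArea(10, 30)));
                frame.add(tabs);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.pack();
                frame.setVisible(true);
            }
        });
    }
}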

Hardware Interface

Hard disk : 40 GB

RAM : 512 MB

Processor Speed : 3.00GHz

Processor : Pentium IV Processor

Software Interface

JDK 1.5

Java Swing

MS-Access/SQL Server

Software Description

What is JAVA?

Java is two things: a programming language and a platform.

Java is a high-level programming language that is all of the following:

* Simple
* Object-oriented
* Distributed
* Interpreted
* Robust
* Secure
* Architecture-neutral
* Portable
* High-performance
* Multithreaded
* Dynamic

Java is also unusual in that each Java program is both compiled and interpreted. With a compiler, you translate a Java program into an intermediate language called Java byte codes, the platform-independent code that is interpreted by the interpreter on the Java platform.

Compilation happens just once; interpretation occurs each time the program is executed. The figure illustrates how this works.

[Figure: Java Program → Compiler → Interpreter → My Program]

You can think of Java byte codes as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it's a Java development tool or a Web browser that can run Java applets, is an implementation of the Java VM. The Java VM can also be implemented in hardware.

Java byte codes help make "write once, run anywhere" possible. You can compile your Java program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. For example, the same Java program can run on Windows NT, Solaris, and Macintosh.

Java Platform

A platform is the hardware or software environment in which a program runs. The Java platform differs from most other platforms in that it is a software-only platform that runs on top of other, hardware-based platforms. Most other platforms are described as a combination of hardware and operating system.

The Java platform has two components:

The Java Virtual Machine (Java VM)

The Java Application Programming Interface (Java API)

You've already been introduced to the Java VM. It's the base for the Java platform and is ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI) widgets.

The Java API is grouped into libraries (packages) of related components. The next section, "What can Java do?", highlights each area of functionality provided by the packages in the Java API.

How does the Java API support all of these kinds of programs? With packages of software components that provide a wide range of functionality. The core API is the API included in every full implementation of the Java platform.

The core API gives you the following features:

The Essentials: Objects, Strings, threads, numbers, input and output, data structures, system properties, date and time, and so on.

Applets: The set of conventions used by Java applets.

Networking: URL's TCP and UDP sockets and IP addresses.

Internationalization: Help for writing programs that can be localized for users worldwide. Programs can automatically adapt to specific locales and be displayed in the appropriate language.

[Figure: a Java program runs on top of the Java API and the Java Virtual Machine, which in turn run on the hardware.]

The API and the Virtual Machine insulate the Java program from hardware dependencies. As a platform-independent environment, Java can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can bring Java's performance close to that of native code without threatening portability.

What can Java do?

However, Java is not just for writing cute, entertaining applets for the World Wide Web (WWW). Java is a general-purpose, high-level programming language and a powerful software platform. Using the full Java API, you can write many types of programs.

Networking

This article is about a client/server multi-threaded socket class. The thread is optional, since the developer is still responsible for deciding whether it is needed. There are other socket classes here and elsewhere on the Internet, but none of them can provide feedback (event detection) to your application like this one does. It provides detection of the following events: connection established, connection dropped, connection failed, and data reception (including 0-byte packets).

Description

This article presents a new socket class which supports both TCP and UDP communication. It provides some advantages compared to other classes that you may find here or in other socket programming articles. First of all, this class does not have limitations such as the need to provide a window handle; that limitation is bad if all you want is a simple console application, so this library avoids it. It also provides threading support automatically, which handles the socket connection and disconnection to a peer, and it features some options not yet found in other socket classes. It supports both client and server sockets. A server socket can be referred to as a socket that can accept many connections, and a client socket is a socket that is connected to a server socket. You may still use this class to communicate between two applications without establishing a connection; in that case, you will want to create two UDP server sockets (one for each application). This class also helps reduce the coding needed to create chat-like applications and IPC (inter-process communication) between two or more applications (processes). Reliable communication between two peers is also supported with TCP/IP, with error handling. You may want to use the smart addressing operation to control the destination of the data being transmitted (UDP only). The TCP operation of this class deals only with communication between two peers.
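As a point of reference, the sketch below shows a plain blocking TCP client using java.net.Socket. It is not the event-driven class described above, and the host name and port are hypothetical.

import java.io.*;
import java.net.Socket;

// Minimal blocking TCP client sketch (illustrative; host and port are hypothetical).
public class TcpClient {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket("localhost", 9000); // connect to a peer
        try {
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            out.println("hello");              // send a line to the server
            System.out.println(in.readLine()); // read the reply
        } finally {
            socket.close();
        }
    }
}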

Analysis of Network Client Server

TCP/IP stack

The TCP/IP stack is shorter than the OSI stack.

TCP is a connection-oriented protocol; UDP (User Datagram Protocol) is a connectionless protocol.

IP datagrams

The IP layer provides a connectionless and unreliable delivery system. It considers each datagram independently of the others. Any association between datagrams must be supplied by the higher layers. The IP layer supplies a checksum that includes its own header. The header includes the source and destination addresses. The IP layer handles routing through an internet. It is also responsible for breaking up large datagrams into smaller ones for transmission and reassembling them at the other end.

UDP

UDP is also connectionless and unreliable. What it adds to IP is a checksum for the contents of the datagram and port numbers.
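As an illustration of connectionless UDP traffic, the sketch below sends a single datagram with java.net.DatagramSocket; the address and port are hypothetical.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Minimal UDP sender sketch: no connection is established and delivery is not guaranteed.
public class UdpSender {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket();
        byte[] data = "hello".getBytes("UTF-8");
        DatagramPacket packet = new DatagramPacket(
                data, data.length, InetAddress.getByName("localhost"), 9001);
        socket.send(packet); // fire and forget; no acknowledgement from the peer
        socket.close();
    }
}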

TCP

TCP supplies logic to give a reliable connection-oriented protocol above IP. It provides a virtual circuit that two processes can use to communicate.

Internet addresses

In order to use a service, you must be able to find it. The Internet uses an address scheme for machines so that they can be located. The address is a 32 bit integer which gives the IP address. This encodes a network ID and more addressing. The network ID falls into various classes according to the size of the network address.

Network address

Class A uses 8 bits for the network address with 24 bits left over for other addressing. Class B uses 16 bit network addressing. Class C uses 24 bit network addressing and class D uses all 32.
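The classful ranges above can be illustrated with a small helper, assumed for this discussion, that classifies an IPv4 address by its first octet.

// Illustrative helper (not from the project) classifying an IPv4 address by its first octet.
public class AddressClass {
    static char classify(int firstOctet) {
        if (firstOctet < 128) return 'A';   // 0xxxxxxx : 8-bit network ID
        if (firstOctet < 192) return 'B';   // 10xxxxxx : 16-bit network ID
        if (firstOctet < 224) return 'C';   // 110xxxxx : 24-bit network ID
        return 'D';                         // 1110xxxx : class D
    }

    public static void main(String[] args) {
        System.out.println(classify(10));   // A
        System.out.println(classify(172));  // B
        System.out.println(classify(192));  // C
    }
}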

Subnet address

Internally, the UNIX network is divided into sub networks. Building 11 is currently on one sub network and uses 10-bit addressing, allowing 1024 different hosts.

Host address

8 bits are finally used for host addresses within our subnet. This places a limit of 256 machines that can be on the subnet.

Port addresses

A service exists on a host, and is identified by its port. This is a 16 bit number. To send a message to a server, you send it to the port for that service of the host that it is running on. This is not location transparency! Certain of these ports are "well known".

Sockets

A socket is a data structure maintained by the system to handle network connections. A socket is created using the call socket. It returns an integer that is like a file descriptor.

ServerSocket

A ServerSocket listens for socket requests and performs message handling, file sharing, database sharing, and similar functions.
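A minimal listening-server sketch, assumed for illustration, accepts connections on a hypothetical port and echoes one line back to each client; it pairs with the TCP client sketch shown earlier.

import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal echo server sketch (illustrative port; handles one client at a time).
public class EchoServer {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(9000); // listen on a hypothetical port
        while (true) {
            Socket client = server.accept();          // block until a client connects
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(client.getInputStream()));
            PrintWriter out = new PrintWriter(client.getOutputStream(), true);
            out.println("echo: " + in.readLine());    // reply and close
            client.close();
        }
    }
}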

JDBC

In an effort to set an independent database standard API for Java, Sun Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of RDBMS. This consistent interface is achieved through the use of "plug-in" database connectivity modules, or drivers. If a database vendor wishes to have JDBC support, he or she must provide the driver for each platform that the database and Java run on.

To gain a wider acceptance of JDBC, Sun based JDBC's framework on ODBC. As you discovered earlier in this chapter, ODBC has widespread support on a variety of platforms. Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster than developing a completely new connectivity solution.
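The sketch below shows the basic JDBC call sequence (load a driver, open a connection, run a query). The JDBC-ODBC bridge driver, the DSN name, and the table are illustrative assumptions; the actual driver depends on whether MS-Access or SQL Server is used.

import java.sql.*;

// Minimal JDBC sketch: driver class, DSN, and table are hypothetical.
public class JdbcDemo {
    public static void main(String[] args) throws Exception {
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");      // JDBC-ODBC bridge (JDK 1.5 era)
        Connection con = DriverManager.getConnection("jdbc:odbc:projectDSN");
        try {
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT username FROM users");
            while (rs.next()) {
                System.out.println(rs.getString("username"));
            }
        } finally {
            con.close();
        }
    }
}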

JDBC Goals

Few software packages are designed without goals in mind. JDBC is one that, because of its many goals, drove the development of the API. These goals, in conjunction with early reviewer feedback, have finalized the JDBC class library into a solid framework for building database applications in Java.

The goals that were set for JDBC are important. They will give you some insight as to why certain classes and functionalities behave the way they do. The eight design goals for JDBC are as follows:

SQL Level API

The designers felt that their main goal was to define a SQL interface for Java. Although not the lowest database interface level possible, it is at a low enough level for higher-level tools and APIs to be created. Conversely, it is at a high enough level for application programmers to use it confidently. Attaining this goal allows for future tool vendors to "generate" JDBC code and to hide many of JDBC's complexities from the end user.

SQL Conformance

SQL syntax varies as you move from database vendor to database vendor. In an effort to support a wide variety of vendors, JDBC will allow any query statement to be passed through it to the underlying database driver. This allows the connectivity module to handle non-standard functionality in a manner that is suitable for its users.

JDBC must be implementable on top of common database interfaces

The JDBC SQL API must "sit" on top of other common SQL level APIs. This goal allows JDBC to use existing ODBC level drivers by the use of a software interface. This interface would translate JDBC calls to ODBC and vice versa.

Provide a Java interface that is consistent with the rest of the Java system

Because of Java's acceptance in the user community thus far, the designers feel that they should not stray from the current design of the core Java system.

Keep it simple

This goal probably appears in all software design goal listings. JDBC is no exception. Sun felt that the design of JDBC should be very simple, allowing for only one method of completing a task per mechanism. Allowing duplicate functionality only serves to confuse the users of the API.

Use strong, static typing wherever possible

Strong typing allows more error checking to be done at compile time; as a result, fewer errors appear at runtime.

Keep the common cases simple

Because, more often than not, the usual SQL calls used by the programmer are simple SELECTs, INSERTs, DELETEs, and UPDATEs, these queries should be simple to perform with JDBC. However, more complex SQL statements should also be possible.

5. System Design

5.1 Design Overview

Sender Module

Secret key Authentication

The sender gives the secret key to the trusted center. The TC verifies the secret key, authenticates the corresponding sender, and issues the session key; otherwise, the TC does not allow the user to transmit.

Encryption

The message is encrypted with the received session key, the qubit is appended to the encrypted message, and the whole packet is transmitted to the corresponding receiver.
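A minimal sketch of this step is given below, assuming a symmetric cipher from the JCE; AES with a 16-byte key and the packet framing are illustrative choices, not the project's actual implementation.

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

// Sketch of the sender's step: encipher the message with the session key, append the qubit string.
public class SenderEncrypt {
    static byte[] encrypt(byte[] message, byte[] sessionKey16) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(sessionKey16, "AES"));
        return cipher.doFinal(message);
    }

    // Hypothetical framing: ciphertext, a separator byte, then the qubit string.
    static byte[] buildPacket(byte[] cipherText, String qubits) throws Exception {
        byte[] q = qubits.getBytes("UTF-8");
        byte[] packet = new byte[cipherText.length + 1 + q.length];
        System.arraycopy(cipherText, 0, packet, 0, cipherText.length);
        packet[cipherText.length] = '|';
        System.arraycopy(q, 0, packet, cipherText.length + 1, q.length);
        return packet;
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "0123456789abcdef".getBytes("UTF-8"); // illustrative 16-byte key
        byte[] packet = buildPacket(encrypt("hello".getBytes("UTF-8"), key), "0110");
        System.out.println(packet.length + " bytes ready to send");
    }
}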

Trusted Center

Secret Key Verification

The TC verifies the secret key received from the user and authenticates the corresponding user for secure transmission.

Session Key Generation

The session key is a shared secret key used for encryption and decryption. Its size is 8 bits, and it is generated from a pseudorandom prime number and the exponential value of a random number (a sketch of one possible construction follows).
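The exact formula is not specified above, so the following is only a sketch of one plausible reading: a pseudorandom prime and a random modular exponentiation are combined and reduced to 8 bits.

import java.math.BigInteger;
import java.security.SecureRandom;

// Sketch of 8-bit session key generation (assumed construction, for illustration only).
public class SessionKeyGen {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        BigInteger prime = BigInteger.probablePrime(16, rnd); // pseudorandom prime
        BigInteger base  = new BigInteger(8, rnd);            // random number
        BigInteger exp   = new BigInteger(4, rnd).add(BigInteger.ONE);
        // combine the prime and the exponential value, then keep the low 8 bits
        int sessionKey = base.modPow(exp, prime).intValue() & 0xFF;
        System.out.println("session key (8 bits): " + sessionKey);
    }
}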

Qubit Generation

The secret key and a random string are converted into hex code and then into binary; the least significant bit of each pair of binary values yields the quantum bits 0 and 1.

The quantum key is generated using the qubit and the session key, depending on the qubit combinations, as follows (a mapping sketch is given after this list):

If the values are 0 and 0, the state is 1/√2 (p[0] + p[1]).

If the values are 1 and 0, the state is 1/√2 (p[0] − p[1]).

If the values are 0 and 1, the state is p[0].

If the values are 1 and 1, the state is p[1].
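The sketch below encodes this rule table directly; the hex strings are hypothetical inputs, and the returned strings are only labels for the four polarization states listed above.

// Sketch of the qubit/state selection rule: the two bits pick one of four states.
public class QubitMapping {
    static String state(int secretBit, int randomBit) {
        if (secretBit == 0 && randomBit == 0) return "(p[0] + p[1])/sqrt(2)";
        if (secretBit == 1 && randomBit == 0) return "(p[0] - p[1])/sqrt(2)";
        if (secretBit == 0 && randomBit == 1) return "p[0]";
        return "p[1]";                                  // bits 1 and 1
    }

    // Least significant bit of a hex digit, as in the description above.
    static int leastBit(char hexDigit) {
        return Character.digit(hexDigit, 16) & 1;
    }

    public static void main(String[] args) {
        String secretHex = "a3";  // hypothetical hex-coded secret key
        String randomHex = "5c";  // hypothetical hex-coded random string
        for (int i = 0; i < secretHex.length(); i++) {
            int s = leastBit(secretHex.charAt(i));
            int r = leastBit(randomHex.charAt(i));
            System.out.println(s + "," + r + " -> " + state(s, r));
        }
    }
}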

Hashing

Hashing is a technique used to protect the session key: the session key is hashed using the master key, and the resulting values are stored in the TC's storage.
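A minimal sketch of this step, assuming SHA-1 as the hash function (the actual function is not named above), is:

import java.security.MessageDigest;

// Sketch: hash the session key together with the master key before storing it at the TC.
public class SessionKeyHash {
    static byte[] hash(byte[] masterKey, byte[] sessionKey) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        md.update(masterKey);     // mix in the master key
        md.update(sessionKey);    // then the session key
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        byte[] digest = hash("master-key".getBytes("UTF-8"), new byte[]{(byte) 0xAB});
        System.out.println("digest length: " + digest.length + " bytes");
    }
}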

Key Distribution

The TC distributes the original session key and qubit to the sender for encrypting the message. It also distributes the key and qubit to the corresponding receiver for decrypting the received message.

Receiver Module

Secret key Authentication

The receiver receives the encrypted message together with the hashed session key and the qubit. It verifies the qubit with the TC, generates the master key, reverses the hash of the session key, reverses the hash of the session key received from the sender, and then compares the two session keys, which improves key authentication.

Decryption

Finally, the receiver decrypts the message using the session key and displays it to the user.
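A sketch of the comparison step is shown below, mirroring the hashing sketch above (SHA-1 is again an assumption). Only if the two hashes match does the receiver go on to decrypt, using Cipher.DECRYPT_MODE with the same cipher as in the sender sketch.

import java.security.MessageDigest;
import java.util.Arrays;

// Sketch of the receiver's check: recompute the keyed hash and compare with the received one.
public class ReceiverVerify {
    static boolean sessionKeyMatches(byte[] masterKey, byte[] sessionKey,
                                     byte[] receivedHash) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        md.update(masterKey);
        md.update(sessionKey);
        return Arrays.equals(md.digest(), receivedHash); // keys agree only if hashes match
    }
}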

Cryptography

Cryptography is the process of protecting information by transforming it into an unreadable format, called cipher text. Only those who possess a secret key can decrypt the message back into plain text. Encryption is the process of converting the original data (called plain text) into an unintelligible form by means of a reversible translation, i.e., based on a translation table or algorithm; this is also called enciphering. Decryption is the process of translating the encrypted text (called cipher text) back into the original data (plain text); this is also called deciphering.

Cryptography systems can be broadly classified into symmetric-key systems, in which both the sender and recipient use a single key for encryption and decryption, and public-key systems, which use two keys: a public key known to everyone and a private key that only the recipient of messages uses. Each of these systems makes use of an algorithm for encryption and decryption. In a symmetric-key cryptographic algorithm, the sender uses a key to encrypt the plain text into cipher text, and the receiver uses the same key to decrypt the cipher text back into plain text. Examples of symmetric-key encryption algorithms are the Data Encryption Standard (DES) and Blowfish. In a public-key encryption algorithm, the sender encrypts the plain text using the receiver's public key, and the receiver decrypts the cipher text using its own private key. Examples of public-key encryption algorithms are Elliptic Curve Cryptography (ECC) and RSA.
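For illustration, the sketch below performs a symmetric-key round trip with DES, one of the algorithms named above: the same generated key encrypts and then decrypts the text.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Symmetric-key round trip with DES: one key for both encryption and decryption.
public class SymmetricDemo {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("DES").generateKey();
        Cipher cipher = Cipher.getInstance("DES");

        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] cipherText = cipher.doFinal("plain text".getBytes("UTF-8"));

        cipher.init(Cipher.DECRYPT_MODE, key);
        System.out.println(new String(cipher.doFinal(cipherText), "UTF-8"));
    }
}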

Cryptography plays a major role in the security aspects of multicasting. For example, consider stock data distribution group, which distributes stock information to a set of users around the world. It is obvious that only those who have subscribed to the service should get the stock data information. But the set of users is not static. New customers joining the group should receive information immediately but should not receive the information that was released prior to their joining. Similarly, if customers leave the group, they should not receive any further information.

Authentication.

Authenticity means that when a user receives a message, it is assured of the identity of the sender. The authenticity requirement can be translated, in the context of secure multicast, into two requirements on key and data distribution.

Key authenticity: only the center can generate a session key.

Data authenticity: the users can distinguish among the data sent by the center and the malicious data sent by an attacker.

Data Flow Diagrams

[Level 0: the sender and receiver each present their secret key to the trusted center; the trusted center returns the session key, and the message encrypted with the session key flows from sender to receiver.]

[Level 1 (key generation at the trusted center): the secret key and random string generation feed session key generation, followed by qubit generation and hashing.]

Use case Diagram:

[Use case diagram: the actors Sender, Trusted Center, and Receiver participate in the Quantum Key Generation use case.]

Class Diagram:

Sender: attribute String Filename; operations TCRequest(), Upload()

Trusted Center: attribute String Key; operations Randomnumber(), sessionkey()

Quantum Key Generation: attribute String Secretkey; operations Setup(), KeyDistribution()

Receiver: attribute String Filename; operations TCRequest(), Download()

Object Interaction Diagram:

[Object interaction: the sender issues a TCRequest to the trusted center, which performs random number generation, session key generation, and qubit creation (users preshare secret keys with the TC). Quantum key matching is then performed: if the keys match, the transaction is allowed and the receiver recovers the original data; otherwise, the transaction is declined.]

6. System Testing

The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies, and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests, and each test type addresses a specific testing requirement.

Types of Tests

6.1 Unit testing

Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and exercise a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results (a minimal test sketch follows).
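The sketch below is a minimal unit test in the JUnit 3 style (assuming JUnit is on the classpath); the helper under test is hypothetical and simply checks that an 8-bit session key stays in the range 0 to 255.

import junit.framework.TestCase;

// Minimal unit-test sketch exercising one unique path of the key-generation logic.
public class SessionKeyGenTest extends TestCase {

    // Hypothetical helper under test: reduce an arbitrary value to 8 bits.
    private int toEightBits(int value) {
        return value & 0xFF;
    }

    public void testSessionKeyIsEightBits() {
        assertTrue(toEightBits(1000) >= 0);
        assertTrue(toEightBits(1000) <= 255);
        assertEquals(0xE8, toEightBits(1000)); // 1000 mod 256 = 232 = 0xE8
    }
}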

6.1.1 Functional test

Functional tests provide systematic demonstrations that functions tested are available as specified by the business and technical requirements, system documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures : interfacing systems or procedures must be invoked.

6.1.2 System Test

System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.

6.1.3 Performance Test

The performance test ensures that output is produced within the required time limits, covering the time the system takes to compile, to respond to users, and to process requests sent to it to retrieve results.

6.2 Integration Testing

Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects.

The task of the integration test is to check that components or software applications, e.g. components in a software system or - one step up - software applications at the company level - interact without error.

Integration testing for Database Synchronization:

Testing the link through which the sign-up request is sent by a new user to the group controller to join the group.

If the login user does not have enough privileges to invoke a screen, the link should be disabled.

The encryption and decryption processes are performed as part of the users' requests.

6.3 Acceptance Testing

User Acceptance Testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.

Acceptance testing for Data Synchronization:

The group controllers have separate roles for modifying the database tables.

The group controllers have the provision to perform multicast operations.

The Key Generation Center acts as a router while encrypted messages are transferred between the groups.

Only the Key Generation Center can generate the keys for the groups.

7. Implementation

Quantum cryptography easily resists replay and passive attacks, whereas classical cryptography enables efficient key verification and user authentication. By integrating the advantages of both classical and quantum cryptography, this work presents two QKDPs with the following contributions:

1. Man-in-the-middle attacks can be prevented, eavesdropping can be detected, and replay attacks can be avoided easily.

2. User authentication and session key verification can be accomplished in one step without public discussions between a sender and receiver.

3. The secret key preshared by a TC and a user can be long term (repeatedly used).

4. The proposed schemes are the first provably secure QKDPs under the random oracle model.

In the proposed QKDPs, the TC and a participant synchronize their polarization bases according to a preshared secret key. During session key distribution, the preshared secret key together with a random string is used to produce another key, a key encryption key, to encipher the session key. A recipient will not receive the same polarization qubits even if an identical session key is retransmitted (a sketch of this construction follows).
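The following sketch captures the idea under stated assumptions: SHA-1 as the key-derivation hash and AES as the cipher are illustrative choices, not necessarily the project's.

import java.security.MessageDigest;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

// Sketch: derive a key encryption key (KEK) from the preshared key and a fresh random
// string, then encipher the session key with it. Because the random string changes every
// run, an identical session key never produces the same ciphertext twice.
public class KekSketch {
    static byte[] encipherSessionKey(byte[] presharedKey, byte[] randomString,
                                     byte[] sessionKey) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        md.update(presharedKey);
        md.update(randomString);
        byte[] kek = new byte[16];
        System.arraycopy(md.digest(), 0, kek, 0, 16);    // derive a 16-byte KEK
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(kek, "AES"));
        return cipher.doFinal(sessionKey);                // session key protected by the KEK
    }
}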

Consequently, the secrecy of the preshared secret key can be preserved and, thus, this preshared secret key can be long term and repeatedly used between the TC and participant. Due to the combined use of classical cryptographic techniques with the quantum channel, a recipient can authenticate user identity, verify the correctness and freshness of the session key, and detect the presence of eavesdroppers. Accordingly, the proposed QKDPs require the fewest communication rounds among existing QKDPs.

The same idea can be extended to the design of other QKDPs with or without a TC.

The random oracle model is employed to show the security of the proposed protocols. The theory behind the random oracle model proof indicates that when the adversary breaks the three-party QKDPs, a simulator can utilize that event to break the underlying atomic primitives. Therefore, when the underlying primitives are secure, the proposed three-party QKDPs are also secure.

Conclusion

This study proposed two three-party QKDPs to demonstrate the advantages of combining classical cryptography with quantum cryptography. Compared with classical three-party key distribution protocols, the proposed QKDPs easily resist replay and passive attacks. Compared with other QKDPs, the proposed schemes efficiently achieve key verification and user authentication and preserve a long-term secret key between the TC and each user. Additionally, the proposed QKDPs have fewer communication rounds than other protocols. Although the requirement of the quantum channel can be costly in practice, it may not be costly in the future. Moreover, the proposed QKDPs have been shown secure under the random oracle model. By combining the advantages of classical cryptography with quantum cryptography, this work presents a new direction in designing QKDPs.

8. Conclusion

The proposed system is an efficient, authenticated, scalable key agreement scheme for large and dynamic multicast systems, based on the bilinear map. Compared with the existing system, we use an identity tree to achieve authentication of group members. Further, it solves the scalability problem in multicast communications, since a large group is divided into many small subgroups. Each subgroup is treated almost like a separate multicast group with its own subgroup key. All the keys used in each subgroup can be generated by a group of KGCs in parallel. The intuitively surprising aspect of this scheme is that even if the subgroup controller aborts, the users in that subgroup are not affected, because every user in the subgroup can act as a subgroup controller. This is a significant feature, especially for mobile and ad hoc networks. From the security analysis we can see that our scheme satisfies both forward and backward secrecy.