Classes Of Intelligent Agents Computer Science Essay

Published: November 9, 2015 Words: 8376

In artificial intelligence, an intelligent agent is an autonomous entity which observes its environment through sensors and acts upon it using actuators, directing its activity towards achieving goals (i.e. it is rational).[1] Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex: a reflex machine such as a thermostat is an intelligent agent,[2] as is a human being, as is a community of human beings working together towards a goal.

[Figure: Simple reflex agent]

Intelligent agents are often described schematically as an abstract functional system similar to a computer program. For this reason, intelligent agents are sometimes called abstract intelligent agents (AIA)[citation needed] to distinguish them from their real-world implementations as computer systems, biological systems, or organizations. Some definitions of intelligent agents emphasize their autonomy, and so prefer the term autonomous intelligent agents. Still others (notably Russell & Norvig (2003)) consider goal-directed behavior the essence of intelligence, and so prefer a term borrowed from economics, "rational agent".

Intelligent agents in artificial intelligence are closely related to agents in economics, and versions of the intelligent agent paradigm are studied in cognitive science, ethics, the philosophy of practical reason, as well as in many interdisciplinary socio-cognitive modeling and computer social simulations.

Intelligent agents are also closely related to software agents (autonomous computer programs that carry out tasks on behalf of users). In computer science, the term intelligent agent may be used to refer to a software agent that has some intelligence, even if it is not a rational agent by Russell and Norvig's definition. For example, autonomous programs used for operator assistance or data mining (sometimes referred to as bots) are also called "intelligent agents".

Classes of intelligent agents

Russell & Norvig (2003) group agents into five classes based on their degree of perceived intelligence and capability:[7]

simple reflex agents

model-based reflex agents

goal-based agents

utility-based agents

learning agents

Simple reflex agents

Simple reflex agents act only on the basis of the current percept, ignoring the rest of the percept history. The agent function is based on the condition-action rule: if condition then action.

This agent function only succeeds when the environment is fully observable. Some reflex agents can also contain information on their current state which allows them to disregard conditions whose actuators are already triggered.

Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments. Note: If the agent can randomize its actions, it may be possible to escape from infinite loops.
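The condition-action rule can be sketched in a few lines of code. The thermostat example below is purely illustrative: the temperature thresholds, condition names, and actions are assumptions for the sake of the sketch, not part of any particular system.

```python
# Sketch of a simple reflex agent as a thermostat.
# The rule table maps conditions straight to actions.

CONDITION_ACTION_RULES = {
    "too_cold": "turn_heating_on",
    "too_hot": "turn_heating_off",
    "comfortable": "do_nothing",
}

def classify(percept, low=18.0, high=24.0):
    """Map the current temperature reading onto a condition."""
    if percept < low:
        return "too_cold"
    if percept > high:
        return "too_hot"
    return "comfortable"

def simple_reflex_agent(percept):
    """Choose an action from the current percept alone:
    no percept history, no internal state."""
    condition = classify(percept)
    return CONDITION_ACTION_RULES[condition]

print(simple_reflex_agent(15.0))  # turn_heating_on
print(simple_reflex_agent(21.0))  # do_nothing
```

Note that the agent consults nothing but the current percept, which is exactly why it can only succeed when everything relevant is observable.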

Model-based reflex agents

A model-based agent can handle a partially observable environment. It maintains an internal state: some kind of structure describing the part of the world that cannot currently be seen. This knowledge about "how the world works" is called a model of the world, hence the name "model-based agent".

A model-based reflex agent should maintain some sort of internal model that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. It then chooses an action in the same way as the reflex agent.
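As a rough sketch (the rule format, the `update_state` signature, and the door scenario are illustrative assumptions), a model-based reflex agent is a reflex agent wrapped around an internal state that is updated from each percept and the agent's own last action:

```python
class ModelBasedReflexAgent:
    """Keeps an internal model updated from the percept history,
    then applies condition-action rules to the modelled state."""

    def __init__(self, rules, update_state):
        self.rules = rules                # list of (predicate, action) pairs
        self.update_state = update_state  # (state, last_action, percept) -> state
        self.state = {}
        self.last_action = None

    def act(self, percept):
        # Fold the new percept into the internal model of the world.
        self.state = self.update_state(self.state, self.last_action, percept)
        # Then choose an action exactly as a reflex agent would.
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = None
        return "no_op"

# Toy scenario: a door the agent cannot always see (percept None).
def update_state(state, last_action, percept):
    state = dict(state)
    if percept is not None:              # door visible: trust the percept
        state["door_open"] = percept
    elif last_action == "open_door":     # door hidden: infer from our own action
        state["door_open"] = True
    return state

rules = [(lambda s: not s.get("door_open", False), "open_door"),
         (lambda s: s.get("door_open", False), "walk_through")]

agent = ModelBasedReflexAgent(rules, update_state)
print(agent.act(False))  # open_door
print(agent.act(None))   # walk_through
```

The second call shows the point of the model: the door is unobserved, but the agent infers it is open because it remembers its own last action.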

Goal-based agents

Goal-based agents further expand on the capabilities of the model-based agents, by using "goal" information. Goal information describes situations that are desirable. This allows the agent a way to choose among multiple possibilities, selecting the one which reaches a goal state. Search and planning are the subfields of artificial intelligence devoted to finding action sequences that achieve the agent's goals.

Although the goal-based agent may appear less efficient in some instances, it is more flexible, because the knowledge that supports its decisions is represented explicitly and can be modified.
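The search a goal-based agent performs can be illustrated with a minimal breadth-first planner over a toy state space (the grid world and the action names are assumptions made for the example):

```python
from collections import deque

def plan(start, goal, successors):
    """Breadth-first search for a sequence of actions reaching a goal
    state -- the simplest form of the search a goal-based agent performs.
    `successors(state)` yields (action, next_state) pairs."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # no action sequence reaches the goal

# Toy world: positions on a grid, moving right or up.
def successors(pos):
    x, y = pos
    yield ("right", (x + 1, y))
    yield ("up", (x, y + 1))

print(plan((0, 0), (2, 1), successors))  # ['right', 'right', 'up']
```

Real planners use far richer state representations and heuristics, but the shape is the same: goal information plus a model of action effects yields an action sequence.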

Utility-based agents

Goal-based agents only distinguish between goal states and non-goal states. It is possible, however, to define a measure of how desirable a particular state is. This measure can be obtained through a utility function, which maps a state to a measure of the utility of that state. A more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent. The term utility can be used to describe how "happy" the agent is.

A rational utility-based agent chooses the action that maximizes the expected utility of the action's outcomes, that is, the utility the agent expects to derive on average, given the probabilities and utilities of each outcome. A utility-based agent has to model and keep track of its environment, tasks that have involved a great deal of research on perception, representation, reasoning, and learning.
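Expected-utility maximization can be sketched directly. The outcome model below, with its probabilities and utilities, is a made-up example:

```python
def expected_utility(action, outcomes):
    """Sum of probability * utility over the action's possible outcomes."""
    return sum(p * u for p, u in outcomes[action])

def choose(actions, outcomes):
    """A rational utility-based agent picks the action whose
    expected utility is highest."""
    return max(actions, key=lambda a: expected_utility(a, outcomes))

# Hypothetical outcome model: (probability, utility) pairs per action.
outcomes = {
    "take_highway": [(0.8, 10), (0.2, -5)],   # usually fast, sometimes jammed
    "take_backroad": [(1.0, 6)],              # reliably mediocre
}

print(choose(list(outcomes), outcomes))  # take_highway (EU 7 beats EU 6)
```

The comparison 0.8·10 + 0.2·(−5) = 7 versus 1.0·6 = 6 is exactly the "given the probabilities and utilities of each outcome" calculation described above.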

Learning agents

Learning has the advantage that it allows an agent to operate in initially unknown environments and to become more competent than its initial knowledge alone might allow. The most important distinction is between the "learning element", which is responsible for making improvements, and the "performance element", which is responsible for selecting external actions.

The learning element uses feedback from the "critic" on how the agent is doing and determines how the performance element should be modified to do better in the future. The performance element is what we have previously considered to be the entire agent: it takes in percepts and decides on actions.

The last component of the learning agent is the "problem generator". It is responsible for suggesting actions that will lead to new and informative experiences.
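A toy sketch of how the four components fit together. The action names, the learning rate, and the fixed every-fourth-step exploration schedule standing in for the problem generator are all illustrative assumptions:

```python
class LearningAgent:
    """Minimal sketch of the four components described above: a
    performance element that acts, a critic that scores behaviour,
    a learning element that updates the performance element, and a
    problem generator that occasionally proposes exploratory actions."""

    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}  # performance element's knowledge
        self.steps = 0

    def performance_element(self):
        # Pick the action currently believed to be best.
        return max(self.values, key=self.values.get)

    def problem_generator(self):
        # Suggest the worst-valued action, purely for its information value.
        return min(self.values, key=self.values.get)

    def learning_element(self, action, feedback):
        # Move the action's value toward the critic's feedback.
        self.values[action] += 0.5 * (feedback - self.values[action])

    def step(self, critic):
        self.steps += 1
        explore = self.steps % 4 == 0       # every 4th step, explore
        action = self.problem_generator() if explore else self.performance_element()
        self.learning_element(action, critic(action))
        return action

# Hypothetical critic: "stack" is genuinely better than "scatter".
critic = lambda a: {"stack": 1.0, "scatter": 0.2}[a]
agent = LearningAgent(["stack", "scatter"])
for _ in range(8):
    agent.step(critic)
print(agent.values)  # "stack" ends up valued well above "scatter"
```

The loop shows the division of labour: the performance element alone would never revise its opinion; the critic and learning element improve it, and the problem generator keeps feeding in informative experience.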

CHARACTERISTICS OF INTELLIGENT AGENTS

Accommodate new problem solving rules incrementally

Adapt online and in real time

Be able to analyze itself in terms of behavior, error and success.

Learn and improve through interaction with the environment

Learn quickly from large amounts of data

Have memory-based exemplar storage and retrieval capacities

Have parameters to represent short and long term memory

Tools & Languages used to implement Intelligent Agents

Many tools and languages are used to implement intelligent agents; some of them are listed below:

Aglet, which is programming code that can be transported along with state information. Aglets are Java objects that can move from one host on the Internet to another. That is, an Aglet that executes on one host can suddenly halt execution, dispatch itself to a remote host, and resume execution there. When the Aglet moves, it takes along its program code as well as its data.

Facile, which is a high-level, higher-order programming language for systems that require a combination of complex data manipulation and concurrent and distributed computing. It combines Standard ML (SML) with a model of higher-order concurrent processes based on CCS. Facile is being used at ECRC to develop Mobile Service Agents.

Penguin, which is a Perl 5 module that provides a set of functions to (1) send encrypted, digitally signed Perl code to a remote machine to be executed; and (2) receive code and, depending on who signed it, execute it in an arbitrarily secure, limited compartment. The combination of these functions enables direct Perl coding of algorithms to handle safe Internet commerce, mobile information-gathering agents, "live content" web browser helper apps, distributed load-balanced computation, remote software update, distance machine administration, content-based information propagation, Internet-wide shared-data applications, network application builders, etc.

Python, which is an interpreted, interactive, object-oriented programming language. It is often compared to Tcl, Perl, Scheme or Java. It is used quite a bit as an embedded or extension language in hypermedia projects, and is used quite a bit for the sort of text processing and administrative scripting that Perl is often used for.

IMPACTS OF INTELLIGENT AGENTS

Intelligent agents are innovative technologies that offer various benefits to their end users by automating complex or repetitive tasks. There are several potential organizational and cultural impacts of this technology that need to be considered. Organizational impacts include the transformation of the entire electronic commerce sector, operational encumbrance, and security overload. Software agents are able to quickly search the Internet, identify the best offers available online, and present this information to end users in aggregate form. Users may not need to manually browse the websites of individual merchants; they are able to locate the best deal in a matter of seconds. This increases price-based competition and transforms the entire electronic commerce sector into a uniform perfect-competition market. The cultural effects of the implementation of software agents include trust affliction, skills erosion, privacy attrition and social detachment. Some users may not feel entirely comfortable fully delegating important tasks to software applications. In order to act on a user's behalf, a software agent needs to have a complete understanding of a user's profile, including his/her personal preferences. This, in turn, may lead to unpredictable privacy issues. When users start relying on their software agents more, especially for communication activities, they may lose contact with other human users and look at the world through the eyes of their agents.

Applications of Intelligent Agents

Here are some examples that illustrate important ways intelligent agents can help solve real problems and make today's computer systems easier to use.

Customer Help Desk

A customer help desk's job is to answer calls from customers and find answers to their problems. When customers call with a problem, the help desk person traditionally looked up answers in hardcopy manuals, but those manuals have since been replaced by searchable CD-ROM collections, and some companies even offer searches over the Internet. Instead of hiring help desk consultants, or having customers search the Internet for an answer themselves, with an intelligent agent the customer describes the problem and the agent automatically searches the appropriate databases (either CD-ROM or the Internet), then presents a consolidated answer with the most likely one first. This is a good example of using an intelligent agent to find and filter information.

Web Browser Intelligent

A web browser intelligent agent, such as IBM's Web Browser Intelligence, helps you keep track of which web sites you have visited and customizes your view of the web by automatically keeping a bookmark list, ordered by how often and how recently you visit each site. It allows you to search for any words you have seen in your bookmark track, and takes you back to the site, allowing you to find and filter quickly. It also helps you find where you were by showing you all the different tracks you took starting at the current page. It can notify you when sites you like are updated, and it can also automatically download pages for you to browse offline.

Personal Shopping Assistant

IBM's Personal Shopping Assistant uses intelligent agent technology to help the Internet shopper or the Internet shop owner find the desired item quickly, without having to browse page after page of the wrong merchandise. With the Personal Shopping Assistant, stores and merchandise are customized as the intelligent agent learns the shopper's preferences while he/she enters any online mall or store or looks at specific merchandise. It can also arrange the merchandise so that the items you like most are the first ones you see. Finally, the Personal Shopping Assistant automates your shopping experience by reminding you to shop when a birthday or anniversary occurs, or when an item goes on sale.

Applications

1. Systems and Network Management:

Systems and network management is one of the earliest application areas to be enhanced using intelligent agent technology. The movement to client/server computing has intensified the complexity of systems being managed, especially in the area of LANs, and as network-centric computing becomes more prevalent, this complexity escalates further. Users in this area (primarily operators and system administrators) need greatly simplified management in the face of rising complexity.

Agent architectures have existed in the systems and network management area for some time, but these agents are generally "fixed function" rather than intelligent agents. However, intelligent agents can be used to enhance systems management software. For example, they can help filter and take automatic actions at a higher level of abstraction, and can even be used to detect and react to patterns in system behaviour. Further, they can be used to manage large configurations dynamically.


2. Mobile Access / Management:

As computing becomes more pervasive and network-centric computing shifts the focus from the desktop to the network, users want to be more mobile. Not only do they want to access network resources from any location, they want to access those resources despite the bandwidth limitations [1] of mobile technology such as wireless communication, and despite network volatility. Intelligent agents which (in this case) reside in the network rather than on the users' personal computers can address these needs by persistently carrying out user requests despite network disturbances. In addition, agents can process data at its source and ship only compressed answers to the user, rather than overwhelming the network with large amounts of unprocessed data.

3. Mail and Messaging:

Messaging software (such as software for e-mail) has existed for some time, and is also an area where intelligent agent functionality is currently being used. Users today want the ability to automatically prioritise and organise their e-mail, and in the future they would like to do even more automatically, such as addressing mail by organisational function rather than by person. Intelligent agents can facilitate all these functions by allowing mail-handling rules to be specified ahead of time, and letting intelligent agents operate on behalf of the user according to those rules. Usually it is also possible (or at least it will be) to have agents deduce these rules by observing a user's behaviour and trying to find patterns in it.

4. Information Access and Management:

Information access and management is an area of great activity, given the rise in popularity of the Internet and the explosion of data available to users. It is the application area that this thesis will mainly focus on. Here, intelligent agents are helping users not only with search and filtering, but also with categorisation, prioritisation, selective dissemination, annotation, and (collaborative) sharing of information and documents.

5. Collaboration:

Collaboration is a fast-growing area in which users work together on shared documents, use personal video-conferencing, or share additional resources through the network. One common denominator is shared resources; another is teamwork. Both of these are driven and supported by the move to network-centric computing. Not only do users in this area need an infrastructure that will allow robust, scalable sharing of data and computing resources, they also need other functions to help them actually build and manage collaborative teams of people, and manage their work products. One of the most popular and best-known examples of such an application is the groupware package called Lotus Notes.

6. Workflow and Administrative Management: [2]

Administrative management includes both workflow management and areas such as computer/telephony integration, where processes are defined and then automated. In these areas, users need not only to make processes more efficient, but also to reduce the cost of human agents. Much as in the messaging area, intelligent agents can be used to ascertain, then automate, user wishes or business processes.

7. Electronic Commerce:

Electronic commerce is a growing area fuelled by the popularity of the Internet. Buyers need to find sellers of products and services, they need to find product information (including technical specifications, viable configurations, etc.) that solves their problem, and they need to obtain expert advice both prior to the purchase and for service and support afterward. Sellers need to find buyers, and they need to provide expert advice about their product or service as well as customer service and support. Both buyers and sellers need to automate the handling of their "electronic financial affairs". Intelligent agents can assist in electronic commerce in a number of ways. Agents can "go shopping" for a user, taking specifications and returning with recommendations of purchases which meet those specifications. They can act as "salespeople" for sellers by providing product or service sales advice, and they can help troubleshoot customer problems.

8. Adaptive User Interfaces:

Although the user interface was transformed by the advent of graphical user interfaces (GUIs), for many, computers remain difficult to learn and use. As the capabilities and applications of computers improve, the user interface needs to accommodate the increase in complexity. As user populations grow and diversify, computer interfaces need to learn user habits and preferences and adapt to individuals. Intelligent agents (called interface agents [3]) can help with both these problems. Intelligent agent technology allows systems to monitor the user's actions, develop models of user abilities, and automatically help out when problems arise. When combined with speech technology, intelligent agents enable computer interfaces to become more human or more "social" when interacting with human users.

Currently, agents are the focus of intense interest on the part of many sub-fields of computer science and artificial intelligence. Agents are being used in an increasingly wide variety of applications [5], ranging from comparatively small systems such as email filters to large, open, complex, mission-critical systems such as air traffic control. At first sight, it may appear that such extremely different types of system can have little in common. And yet this is not the case: in both, the key abstraction used is that of an agent.

5.1 Industrial Applications

Industrial applications of agent technology were among the first to be developed: as early as 1987, Parunak reports experience with applying the contract net task allocation protocol in a manufacturing environment (see below). Today, agents are being applied in a wide range of industrial applications:

• process control

• manufacturing

• air traffic control

5.2 Commercial Applications

As the richness and diversity of information availableto us in our everyday lives has grown, so the need tomanage this information has grown.

• Information Management

- Information filtering.

- Information gathering.

• Electronic Commerce

Currently, commerce [6] is almost entirely driven by human interactions; humans decide when to buy goods, how much they are willing to pay, and so on. But in principle, there is no reason why some commerce cannot be automated. By this, we mean that some commercial decision making can be placed in the hands of agents.

• Business Process Management

Company managers make informed decisions based on a combination of judgment and information from many departments. Ideally, all relevant information should be brought together before judgment is exercised. However, obtaining pertinent, consistent and up-to-date information across a large company is a complex and time-consuming process. For this reason, organizations have sought to develop a number of IT systems to assist with various aspects of the management of their business processes.

5.3 Medical Applications

Medical informatics is a major growth area in computer science: new applications are being found for computers every day in the health industry. It is not surprising, therefore, that agents should be applied in this domain. Two of the earliest applications are in the areas of health care and patient monitoring.

• Patient Monitoring

• Health Care

5.4 Entertainment

The leisure industry is often not taken seriously by the computer science community. Leisure applications are frequently seen as somehow peripheral to the 'serious' applications of computers. And yet leisure applications such as computer games can be extremely lucrative: consider the number of copies of 'Quake' sold since its release in 1996. Agents have an obvious role in computer games, interactive theater, and related virtual reality applications: such systems tend to be full of semi-autonomous animated characters, which can naturally be implemented as agents.

• Games

• Interactive Theater and Cinema

Limitations of Intelligent Agents

As with any technology-based solution, a number of limitations and concerns exist regarding the usage of intelligent agents. [5]

No overall system controller: intelligent agents may not be appropriate where there are global constraints that need to be enforced, due to the fact that each agent is effectively acting independently of any central controller. Applications requiring real-time responses are also inappropriate.

No global perspective: an agent can only make decisions based on locally accumulated knowledge; an agent might therefore be missing the "big picture" of a problem domain.

Trusting delegation: a user of an intelligent agent is effectively handing responsibility for data acquisition and decision making to a piece of computer code, so they must be sure that they can trust the agent to carry out the delegated task with integrity.

While some of these issues are technology related, others are side effects of the concept of intelligent agents itself. We cannot design an agent to be autonomous and capable of reaching its own decisions, only to decide that we now want to exert central control over these agents; that would defeat the purpose of the concept.

Common Attributes of Intelligent Agents

While no definitive definition of what makes a computer program an intelligent agent currently exists, researchers have reached some consensus on the common attributes of intelligent agents:

Autonomy: an intelligent agent must be capable of working without human supervision.

Self-learning: an intelligent agent should be capable of changing its behaviour according to its accumulated knowledge.

Proactive: an intelligent agent should be capable of making decisions on its own initiative.

Communication: an intelligent agent needs to be able to communicate with other systems and agents, while also communicating with the end user in natural language.

Co-operation: some of the more advanced intelligent agents should be able to act in unison with other agents to carry out complex tasks.

Mobility: an intelligent agent may need to be mobile, to enable it to travel throughout computer systems in order to accumulate knowledge and carry out tasks.

Goal Driven: all intelligent agents must have a goal, a user-defined purpose, and then act in accordance with that purpose.

Not all agents must possess every one of these attributes to be considered valid. For example, each agent is likely to have different knowledge, capabilities, reliability, resources and responsibilities that will all have a bearing on the design of the agent.

Some generic difficulties with designing intelligent agents

• Inference problem. The environment dynamics and the mechanism behind the reward signal are (partially) unknown. The policies need to be able to infer "good control actions" from the information the agent has gathered through interaction with the system.

• Computational complexity. The policy must be able to process the history of observations within a limited amount of computing time and memory.

• Tradeoff between exploration and exploitation.

- To obtain a lot of reward, a reinforcement learning agent must prefer actions that it has tried in the past and found to be effective in producing reward. But to discover such actions, it has to try actions that it has not selected before.

- This may be seen as a subproblem of the general inference problem. It is often referred to in classical control theory as the dual control problem.

The agent has to exploit what it already knows in order to obtain reward, but it also has to explore in order to make better action selections in the future. The dilemma is that neither exploration nor exploitation can be pursued exclusively without failing at the task. The agent must try a variety of actions and progressively favor those that appear to be best. On a stochastic task, each action must be tried many times to gain a reliable estimate of its expected reward.

• Exploring the environment safely. During an exploration phase (more generally, any phase of the agent's interaction with its environment), the agent must avoid reaching unacceptable states (e.g., states that may endanger its own integrity). By associating rewards of −∞ with those states, safe exploration can be reduced to a problem of exploration-exploitation.
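The exploration-exploitation tradeoff described above is often handled with an epsilon-greedy rule. The sketch below also excludes actions known to lead to unacceptable (utility −∞) states, as a crude stand-in for safe exploration; the action names and values are illustrative assumptions.

```python
import random

def epsilon_greedy(q_values, epsilon=0.1, unsafe=frozenset(), rng=random):
    """Epsilon-greedy action selection: exploit the best-known action
    with probability 1 - epsilon, explore uniformly otherwise.
    Actions flagged as unsafe (modelled here as utility -inf) are
    never chosen, so exploration stays within acceptable states."""
    safe = [a for a in q_values if a not in unsafe]
    if rng.random() < epsilon:
        return rng.choice(safe)            # explore
    return max(safe, key=q_values.get)     # exploit

q = {"left": 0.4, "right": 0.9, "cliff": float("-inf")}
action = epsilon_greedy(q, epsilon=0.1, unsafe={"cliff"})
print(action)  # "right" most of the time, occasionally "left", never "cliff"
```

With epsilon decayed over time, this simple rule progressively favors the actions that appear best while still sampling each action often enough to estimate its expected reward.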

The area of Intelligent Interfaces is one of the most heterogeneous research subjects dealing with computers. In this area, people from vastly different disciplines, and from different research areas within disciplines, meet, debate and collaborate. The term is so wide that people shrink from it in practice: survey articles have been written about intelligent tutoring, adaptive interfaces, explanations or multimodal dialogue, but no survey article tries to address the whole area of intelligent interfaces. Even though all of these areas can claim to develop intelligent interfaces, none of them addresses this aspect specifically.

If the work in this area is so widespread and diverse, one may ask whether there is any reason at all to give it a specific name. Wouldn't it be better to avoid the notion of intelligent interfaces altogether, and continue to investigate these areas in parallel, with their different focuses?

I believe that there is an added value in the notion of intelligent interfaces, in that it captures a set of problems and ideas that are shared between all these more specific research areas. The term provides a common framework of reference for a large group of research directions, but it also defines a set of research issues that are worth pursuing in their own right, without being artificially constrained by an application area or a specific technical solution. The purpose of this paper is to scope this research area and highlight its specific research issues.

What is not an intelligent interface?

To understand the notion of intelligent interfaces, we can start by a discussion of what cannot be seen as a definition of intelligent interfaces.

Firstly, we can note two things: An "intelligent system" does not necessarily have an intelligent interface, and neither is a well-designed interface necessarily intelligent.

Why is an "intelligent system" not an intelligent interface? The reason is that the intelligence of an "intelligent system" does not necessarily manifest itself in a user interface. The term "intelligent system" is as difficult to define as the term "intelligent interface", but we can consider the more limited field of knowledge-based systems, which definitely constitute a kind of intelligent system. Knowledge-based systems are constructed to reason about and act upon a vast source of expertise in some limited field of application. The system may take its input from any source, such as human users or automatic sensors, and the output may equally well be actions in an automatic control loop or advice to a human user. The first generation of expert systems was characterised by a very mechanical and system-controlled dialogue [Berry and Broadbent 1986]. Developing intelligent interfaces to knowledge-based systems is by no means an easy task, and can be considered a research area of its own. A specific issue here is the construction of explanations that motivate the system's advice to the user [Southwick 1989].

Next, we must understand why a "good" interface cannot automatically be considered intelligent. There exist today several approaches to the development of easy-to-use and effective interfaces, documented by guidelines or interface standards [Smith and Mosier 1986, Nielsen 1989]. Why, then, will we not call these interfaces intelligent? The answer is far from straightforward, but we can note that such standards and guidelines often impose arbitrary restrictions on the behaviour of the interface. The reason for these restrictions is mainly that the interface should be easy to learn. In design guidelines for spoken menus, it may be stated that a menu should not be more than three items long, lest users forget some of the options. In the interface standard for the Macintosh, there is a specific set of menus that should always be included in an application, and some of these menus consist of standard menu entries. It also contains some conventions, such as shadowing of disabled options. There is nothing wrong with such guidelines or conventions, but they do not always lead to optimal behaviour. Some users may be able to listen to very long menus, in particular users well acquainted with the application. The standard entries of the menu bar can become very awkward for novel applications, such as virtual reality environments.

The main issue in human-machine interaction is to obtain a "collaboration situation" between a human user and a computer system. The system must be attuned to the user, and the user to the system. "Good" conventions and guidelines shift the entire burden of adaptation to the user, and the design restrictions that they impose are geared towards easing this task for the user.

Both "intelligent systems" and "good interfaces" are thus too broad definitions: they encompass systems that we do not want to consider as intelligent interfaces. But there are also two possible definitions suggested in literature that I view as being too narrow: Systems that mimic human dialogue, and adaptive interfaces.

Most researchers would agree that a system that can maintain a human dialogue would be considered intelligent (remember the Turing test?). The problem is that there are a lot of interfaces that we would consider intelligent that do not look "human" in any sense at all. An example is the PUSH interface [Höök et al. 1996, Höök 1997], which presents hypertext in a manner that is adapted to the user's current task. The system is controlled mainly through direct manipulation, but the output consists of a text where certain pieces are "hidden" from view, to give a comprehensive overview of the pieces of text that are most relevant to the user in his or her current task. This very passive form of user adaptation does not in any way mimic human behaviour, but is constructed to be a natural extension of the hypertext view of information.

The view of intelligent interfaces as mimicking human behaviour is not only too restrictive, it may even be considered harmful to the research field as such. The problem is that it may put emphasis on characteristics of human communication that are peripheral and of little use in computer communication. A specific issue is the usage of natural language in human-machine interaction. An inherent quality of human language is that it is ambiguous. Words and sentences mean different things in different situations, and the same sentence may convey messages at several different levels of interpretation. This may be effective in human-human communication, but it requires a level of interpretation and initiative from both partners that users may not want or expect from a computer. Here, many of the principles of standard interface guidelines apply: a computer interface should be transparent and predictable, to allow users to understand it and learn to use it. Computers and humans are also good at different things, and a human-computer dialogue can be designed to make the most of their different capabilities. User interfaces can, for example, use the capacity of computers to store vast amounts of information, to help the user maintain a memory of previous interactions, and to present information in multiple modalities to enhance the presentation.

Finally, an intelligent interface need not maintain a model of the user and adapt to that model [Wahlster and Kobsa 1988]. This is a possible definition of intelligent interfaces that I will avoid, because it imposes an unnatural technical constraint. Consider, for example, the case when we aim to develop an intelligent interface, but discover during design that it suffices to maintain several input and output mechanisms in parallel. For example, some users may prefer to input queries through a query language and some through point-and-click. We can then construct an interface which always allows both input modes, or one which only maintains one input mode and lets the user choose which one. In the second case, the system maintains a very simple model of the user, consisting of his or her preferred input mode. Obviously, one would either like to call both interfaces intelligent, or both unintelligent, but there is no reason why one would be intelligent and the other not.
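The two variants can be made concrete with a minimal sketch (all class and field names here are hypothetical, invented for illustration): the second interface's entire "user model" is a single stored preference.

```python
# Minimal sketch with hypothetical names: one interface accepts both
# input modes with no user model; the other stores a single preferred
# mode -- a one-field "user model".

class DualModeInterface:
    """Always accepts both input modes; keeps no user model."""
    def handle(self, event):
        if event["mode"] == "query":
            return f"parsed query: {event['text']}"
        return f"clicked: {event['target']}"

class SingleModeInterface:
    """Maintains one active mode, chosen by the user."""
    def __init__(self, preferred_mode="click"):
        self.preferred_mode = preferred_mode  # the entire user model

    def handle(self, event):
        if event["mode"] != self.preferred_mode:
            return "input ignored: switch modes first"
        if self.preferred_mode == "query":
            return f"parsed query: {event['text']}"
        return f"clicked: {event['target']}"

dual = DualModeInterface()
single = SingleModeInterface(preferred_mode="query")
print(dual.handle({"mode": "click", "target": "file.txt"}))
print(single.handle({"mode": "query", "text": "open file.txt"}))
```

The only difference between the two classes is one stored field, which illustrates why calling one intelligent and the other not would be arbitrary.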

Scope of intelligent interfaces

Typically, we require of an intelligent interface that it should employ some kind of intelligent technique. What, exactly, counts as an intelligent technique will vary over time, but the following is a fairly complete list of the kinds of techniques employed in intelligent interfaces today:

User Adaptivity: Techniques that allow the user - system interaction to be adapted to different users and different usage situations.

User Modelling: Techniques that allow a system to maintain knowledge about a user.

Natural Language Technology: Techniques that allow a system to interpret or generate natural language utterances, in text or in speech.

Dialogue Modelling: Techniques that allow a system to maintain a natural language dialogue with a user, possibly in combination with other interaction means (multimodal dialogue).

Explanation Generation: Techniques that allow a system to explain its results to a user.

But providing such a list of technologies does not capture the essential feature of the intelligent interface research area: an intelligent interface must utilise technology to make an improvement: the resulting interface should be better than any other solution, not just different and technically more advanced.

One way to understand the research area better is to compare it to the research goals outlined by Russell and Wefald in their definition of intelligent agents [Russell and Wefald 1991]. Firstly, Russell and Wefald define an ideal rational agent as an agent that always does the right thing. Obviously, the ideal rational agent does not exist: even if we could define an algorithm for always computing the ideal response, it would take infinite computational power to produce that response before it becomes obsolete. So Russell and Wefald define an intelligent agent as an agent that has some limitations in its reasoning power, but that always does the right thing within these limitations. The limitations of an agent are essentially given by its architecture, so that certain results take a very long time to produce, and may for this reason become suboptimal in a changing world.
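The trade-off behind bounded rationality can be sketched in a few lines, under heavily simplified assumptions of my own (a fixed cost per deliberation step and a known expected gain per step, neither of which is how Russell and Wefald formalise it): the agent keeps refining its answer only while a step's expected gain exceeds its time cost.

```python
# Illustrative sketch of bounded deliberation, NOT Russell and Wefald's
# actual formalism: each reasoning step has an expected quality gain
# and a fixed time cost; the agent stops deliberating once further
# thinking costs more than it helps. All numbers are made up.

def act_with_bounded_rationality(improvements, step_cost):
    """improvements[i] = expected quality gain of deliberation step i."""
    quality = 0.0
    for gain in improvements:
        if gain <= step_cost:  # thinking longer no longer pays off
            break
        quality += gain - step_cost
    return quality

# Diminishing returns: later steps help less, so deliberation halts early.
print(act_with_bounded_rationality([5.0, 2.0, 0.5, 0.1], step_cost=1.0))
```

The point of the sketch is only the stopping rule: an agent with limited resources does the best it can within them, rather than computing the ideal response too late.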

Following Russell and Wefald, we could define intelligent interfaces the same way: an ideal interface is simply an interface that always gives the absolutely optimal response, and an intelligent interface is one that has limited capabilities, but gives the optimal response within these limitations. But for interfaces, the limitations are not restricted to the internal architecture of the system; they lie foremost in its abilities to interact. For example, an optimal speech interface is something entirely different from an optimal VR interface.

For Russell and Wefald's definition of rational agency, this definition allows a very clean split into two research issues: what the "right" action is, and how to value a degradation in result against a time delay. This division is not really possible when we consider interfaces. The reason is that there is no clear 'degradation curve': given a certain set of restrictions in reasoning power or available interaction modalities, the optimal interface behaviour may be completely different from the optimal behaviour under other constraints. The research area of intelligent interfaces comprises two research issues that are dual and complementary: we must seek to create an optimal design of an interface given a particular model of the limitations in reasoning power and interaction modalities, and conversely, the quest for a novel and better interface design may require an extension of the reasoning power and presentation means of an interface. We can define the intelligent interface research area based on this double aim:

The research area of Intelligent Interfaces combines design principles and technology advancements for effective human-computer interaction, and research on intelligent interfaces aims to extend the boundaries of both.

If we use this as our definition of the research area of intelligent interfaces, we find a number of characteristic features of a research project in this area.

The first issue is that the area is inherently cross-disciplinary. A normal, "good" interface cannot be called intelligent if it does not involve some technology that reasons about "doing the right thing". Similarly, a novel mechanism for dialogue management or presentation generation does not constitute an intelligent interface unless it is combined with principles for what constitutes a good interface design given this novel reasoning mechanism.

A second issue is that the area of intelligent interfaces is concerned with the development of systems that really work. For example, an abstract model of human collaboration may be useful for sociologists, but if it cannot be put to use in the design of an interface or the development of a dialogue manager, it falls outside the area of intelligent interfaces. Similarly, several models of inference used in artificial intelligence assume infinite rationality; this is useful only as a (bad) approximation of the reasoning capacities of a computer system, and will not do as an approximation of human reasoning in a user model.

Finally, the research area of intelligent interfaces is neither solely application-driven nor solely technology-driven, but both. New interface designs may be developed to accommodate new technological developments, but research on interface technology can equally be motivated by novel application areas and interface designs.

State of the Art in Intelligent Interfaces

So how do people actually go about doing research on intelligent interfaces? As mentioned in the beginning, the research area is too large to be addressed in a single, ambitious project or even in a research programme. Instead, researchers typically focus on developing intelligent interfaces for a particular application or application area. The rest of this paper is devoted to an extremely brief run-through of the state of the art in intelligent interfaces. I will first sketch the main application areas for intelligent interfaces, and then go through some techniques and design principles for intelligent interfaces.

Application areas for Intelligent Interfaces

There exist many applications that work well without any kind of system "intelligence" - applications where the computer is a mere tool for a user who is well aware of, and capable of performing, a specific task. We can compare this usage of the computer with the usage of a hammer: the hammer need not be intelligent; it suffices that it can be used by a user who can handle a hammer. Tools, typically, can be used in several ways, even for purposes the original inventor did not think of; this requires a flexible and robust design, but not any intelligence or adaptivity built into the tool.

The main application areas for intelligent interfaces are thus such where the knowledge about how to solve a task partially resides with the computer system. Since the user does not know exactly what should be done, he or she cannot manipulate the computer as a tool, but must ask the system to do something for him or her. This request may be incomplete, vague or even incorrect given the user's real needs.

Some typical application areas that can be characterised this way are intelligent tutoring, intelligent help, and information filtering.

Intelligent Tutoring. A "tutor" is a program that aims to give a personalised "education" to a user in a specific domain of knowledge [Shute and Psotka 1994]. The tutor program may need to infer the user's understanding of the domain by analysing the user's performance on test problems. The advice can be given actively, by intervening and suggesting alternative courses of action, or passively, by answering explicit user queries. In both cases, the answers can be tailored to what the system perceives as the user's needs and misunderstandings. Passive tutoring is often done in the style of "critiquing", where the user first suggests a full solution and the system then judges this solution, points out errors, and suggests alternative solutions.
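The diagnostic step described above can be sketched very simply (skill names and the mastery measure are my own illustrative assumptions, far cruder than real student modelling): record per-skill performance on test problems and direct the next hint at the skill with the lowest success rate.

```python
# Toy tutor sketch with hypothetical skill names: infer the learner's
# weakest skill from performance on test problems, so the next advice
# can be tailored to the likely misunderstanding.

from collections import defaultdict

class ToyTutor:
    def __init__(self):
        # skill -> [number correct, number attempted]
        self.attempts = defaultdict(lambda: [0, 0])

    def record(self, skill, correct):
        self.attempts[skill][1] += 1
        if correct:
            self.attempts[skill][0] += 1

    def weakest_skill(self):
        """The skill with the lowest observed success rate."""
        return min(self.attempts,
                   key=lambda s: self.attempts[s][0] / self.attempts[s][1])

tutor = ToyTutor()
for skill, ok in [("fractions", True), ("fractions", True),
                  ("negatives", False), ("negatives", True)]:
    tutor.record(skill, ok)
print(tutor.weakest_skill())  # "negatives": 1 of 2 correct
```

A real tutoring system would use a far richer student model, but the loop is the same: observe performance, update the model, tailor the response.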

Intelligent Help. A "help" system aids a user in performing a specific task [Breuker 1990]. Help is very similar to tutoring, but the main objective of a help system is to get something done, not to make the user learn something. Another difference is that many tutoring systems will lay out specific tasks for the user to do, in order to diagnose his or her misconceptions. A help system must act upon whatever information it can gather from the user's own choice of interactions with the system. A help system can either give help about the functionality of a computer program, or about some computer-independent task (repairing a car, for example). As with tutoring, help can be active or passive.

Information filtering. In open information sources such as the Internet, it is comparatively "cheap" to distribute information to a very large group of recipients. For recipients, this means that they are flooded with large masses of information, and find it hard to extract the information that is really relevant or interesting to them. Users need help in selecting the information that is relevant to them, but the problem is that they do not know what is out there. Information filtering techniques, and information retrieval in general, aim to find structure in the available information that can be used to aid users in navigating the information space and selecting the information that is relevant to themselves. The task is called "filtering" when the information space is rapidly changing. Information filtering tools may rely on text or image processing, but may also log the reading patterns of groups of users, to determine what kind of users are interested in a certain piece of information.

Tools and Techniques for Intelligent Interfaces

Most computer tools and techniques that are used in intelligent interfaces stem from the artificial intelligence field. There are two main areas that come into play: user modelling and natural language dialogue.

The term "User Modelling" is used in two different senses. In software design methodologies, it is sometimes used to denote the analysis of the prospective users of a computer system to be developed. In the research area of intelligent interfaces, it is used to denote a model of the user that the system maintains and adapts its behaviour to. This is sometimes also called 'system user modelling'. Some of the literature on user modelling for intelligent interfaces also requires that the model be explicit, so that it can be easily inspected and modified [Wahlster and Kobsa 1988]. In this view, a set of switches in the program that determine what certain inputs or outputs will look like does not constitute a user model. This is a somewhat awkward distinction, since it is possible to maintain a very explicit model of the user during development, which decides which switches are needed and what effects they should have, but which does not motivate an explicit user model in the actual program. In what follows, we will assume that any program that adapts its behaviour to some characteristics of the user maintains a user model.

A program that maintains a user model may be adaptable or self-adaptive. An adaptable program lets the user select how the system should adapt, while a self-adaptive program adapts autonomously, by deducing the user's needs from his or her interactions with the system. The distinction can be made more fine-grained; Malinowski et al [Malinowski et al. 1992] distinguish between several levels of adaptivity, depending on who takes the initiative for the adaptation, who proposes it, who decides upon it, and who carries it out. For example, a system may detect that a user would do better with a slightly different format of menus. It then takes the initiative and suggests this modification of the interaction style. The user can accept or reject the modification, and if the user accepts it, the system moves over to the new menu style. In this example, the initiative and the suggestion both come from the system, and the system also performs the adaptation. The control still resides with the user, since the user can accept or reject the proposed adaptation.
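The menu example just given can be sketched as a small accept/reject loop (class, method, and threshold are hypothetical, chosen only to make the division of roles concrete): the system takes the initiative and proposes, the user decides, and the system carries out the adaptation.

```python
# Sketch of the adaptation loop described above, with invented names:
# system initiative and proposal, user decision, system execution.

class AdaptiveMenus:
    def __init__(self):
        self.style = "long-menus"

    def propose_adaptation(self, observed_errors):
        # System initiative: after repeated menu errors, propose a change.
        if observed_errors > 3:
            return "compact-menus"
        return None

    def apply_if_accepted(self, proposal, user_accepts):
        # User decides; the system carries out the accepted adaptation.
        if proposal is not None and user_accepts:
            self.style = proposal
        return self.style

ui = AdaptiveMenus()
proposal = ui.propose_adaptation(observed_errors=5)
print(ui.apply_if_accepted(proposal, user_accepts=True))  # compact-menus
```

Moving any one of the four roles (initiative, proposal, decision, execution) from system to user yields one of Malinowski et al's other levels of adaptivity.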

Research on natural language dialogue is directly inspired by the thought of getting a computer to carry out a human-like dialogue. Since people are able to interact with each other in natural language, it should be natural and easy to interact with a computer in the same manner. The research has many facets, ranging from the literal interpretation of natural language sentences to recognising the focus and topic shifts of natural dialogue [Grosz and Sidner 1986]. Here we also find the research on text processing that is necessary to enable advanced information filtering.

General language capacities for computers have been a vision since the sixties. Unfortunately, while many other AI visions of the sixties have come true, true natural language remains a vision. In the meantime, several other effective means of interacting with computers have become a reality, such as direct manipulation and restricted speech. An important area of research for intelligent interfaces is to integrate several ways of interaction in multimodal human - computer interaction [Bretan 1995]. In this view of interaction, each combination of language and medium constitutes a "modality". Language can, for example, be communicated through speech or text - these constitute two different modalities. Similarly, selection by clicking on an icon or through a menu choice constitute different modalities. Different modalities are good at different things [Bretan 1995] - speech is useful for input when your hands are occupied (placed on a steering wheel, for example), and language is a useful input means when one needs to refer to something that is not currently visible and cannot be pointed at. The task for multimodal dialogue management is to integrate and combine several modalities for interaction into a seamless human - computer dialogue. In multimodal interaction, natural language is a central and important ingredient, but it is not a goal in its own right, and deficiencies in language understanding can be compensated by the ability to interact using other modalities such as direct manipulation.

Design Considerations for Intelligent Interfaces

The roots for research on interface design for intelligent interfaces lie mainly in cognitive psychology -- the theory of human thought. Intelligent interfaces are intended to be adapted to the user's way of thinking, and to some extent understand how the user thinks.

These are very ambitious goals. Some of the early models of human cognition in interacting with computer interfaces aimed to be analytical in this sense. GOMS [Kieras 1988], for example, can be used to estimate the cognitive load on users in routine interface tasks. But these models can be used only for an analysis at a rather low level of detail, and provide little insight into what constitutes an appropriate interface design. The alternative has become to apply methods and principles that have been developed for traditional interfaces, but to extend and modify them to be applicable to the new functionalities and interaction principles found in intelligent interfaces [Ereback and Höök 1994]. The prevailing development strategy is that of iterative and user-oriented design, where the interface is repeatedly tested with users to refine the design, and to see which adaptations work and which do not.

One such interaction principle is the principle of transparency and control. In general, an interface should allow the user to inspect the functionality of a system, in order to control and correct it if it goes off target and produces an unwanted result [du Boulay et al. 1981]. If an interface is self-adaptive, the same applies. A user must be able to inspect why a certain adaptation was generated, and correct the behaviour if the result was not what the user wanted [Höök et al. 1996]. Inspection is also important to allow the user to trust a system: an expert system must, for example, be able to produce an explanation of why it suggested a certain action. Otherwise, a user may ignore the system's advice out of mistrust of its competence.

We previously noted that intelligent interfaces may provide both active and passive adaptations to the user's needs. These may require different interaction metaphors to be understood by the user. If the prevailing interaction metaphor is that of direct manipulation, the system must behave rather passively, and let the user maintain the initiative and control of the interaction. The intelligence of the system may show only in the set of options that the system suggests, for example. An example of a very passive intelligent interface is the adaptive prompts suggested in [Kühme et al. 1993]. Their system presents a completely standard direct manipulation interface, but in addition to the normal interface, the system maintains a small menu of the three or four most useful "next actions". This menu is continuously updated, and may provide shortcuts to the user's next action. On the other hand, if the system is to take a lot of initiative, make active suggestions, or interpret the user's queries or commands differently in different situations, this can be conveyed through an "interface agent". In this situation, the user will perceive the interface agent as a conversation partner, rather than a useful tool. This mode of interaction has sometimes been called "indirect management" [Maes 1994].
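A menu of likely "next actions" in the style just described can be approximated with a very small sketch (the action-pair counting scheme and all names are my own simplifying assumptions, not a description of Kühme et al's system): count which action tends to follow which, and keep the top few candidates as shortcuts.

```python
# Sketch of adaptive next-action prompts, with illustrative data:
# count action-to-action transitions and keep a small menu of the
# most likely next actions given the user's last action.

from collections import Counter, defaultdict

class AdaptivePrompts:
    def __init__(self, menu_size=3):
        self.follows = defaultdict(Counter)  # action -> Counter of next actions
        self.last = None
        self.menu_size = menu_size

    def observe(self, action):
        if self.last is not None:
            self.follows[self.last][action] += 1
        self.last = action

    def menu(self):
        """The most likely next actions, given the last observed action."""
        ranked = self.follows[self.last].most_common(self.menu_size)
        return [action for action, _ in ranked]

p = AdaptivePrompts()
for a in ["open", "edit", "save", "open", "edit", "save", "open", "print"]:
    p.observe(a)
p.observe("open")
print(p.menu())  # "edit" ranks first after "open"
```

Because the menu sits beside an otherwise standard direct-manipulation interface and never acts on its own, the adaptation stays passive: the user can ignore it entirely.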

Motivation behind intelligent interfacing

Why do we need intelligent interfaces? There are a couple of good reasons for wanting to create systems with intelligent interfaces:

Applications are becoming increasingly complex; users may need some guidance on how to use a particular part of a program, especially if the parts in question are rarely used or are confusing.

Applications are managing a lot of information. Often, there is too much information to display to the user. Techniques for determining what information is most pertinent for a particular user are important so the user isn't overwhelmed with data.

As computers become more widely used in the workplace, more non-experts are finding themselves in front of systems which they don't fully understand. The assistance offered by intelligent interfaces might alleviate some problems and misunderstandings between computers and users.

Computers are being used in an increasing number of special or extreme situations, or are being used by special users. Examples of special or extreme situations include military programs, medical software, and programs in high-stress environments (such as disaster management). Intelligent interfaces would be needed in these situations because they can better provide for the user through the use of multimodal communication and knowledge of system functionality. Special users include primarily those with conditions that would otherwise prohibit them from effectively using a computer, such as people with visual or motor impairments. Through multimodal communication and interface adaptability, intelligent interfaces can help make computers accessible to these users.

Examples of intelligent interfaces

There are quite a few programs which use intelligent interfaces. Some of the more significant of these (those systems which are in actual use, as opposed to those written for the sole purpose of demonstrating IntInt techniques) include the following:

CUBRICON. This is a system used for Air Force command and control. It incorporates speech input and output, natural-language text, graphics, and pointing gestures by the user [ref]. The objective of using IntInt techniques in this system is to "simplify operator communication with sophisticated computer systems."

SAGE. This program creates intelligent data-graphics. The intelligent interfacing is in determining the best way to display a set of data so that the data is readable and understandable.[ref]

CHORIS. This system is designed to enable a wide range of users to interact effectively with varying types of complex systems [ref]. Its main strength lies in its knowledge of these complex systems.

UCEgo, the intelligent component of UNIX Consultant. This is a natural language system which helps the user solve problems encountered while using UNIX.