Greer, J. (2001). Lessons learned in deploying a multi-agent learning support system: The I-Help experience.

Lessons Learned in Deploying a Multi-Agent Learning Support System: The I-Help Experience
Jim Greer, Gordon McCalla, Julita Vassileva, Ralph Deters, Susan Bull, Lori Kettel. ARIES Laboratory, Department of Computer Science, University of Saskatchewan, Canada.
Abstract: This paper focuses on lessons learned from several real, large-scale deployments of I-Help, an agent-based peer-help learning support system. These lessons fall into two broad categories: software engineering lessons and usage lessons. In the deployments of I-Help to date we have learned a number of important things about the technology needed to support widespread use of a distributed learning support system. In particular, accessibility, dependability, and scalability are critical needs. We have also learned a number of things about how, why, and even whether students will use a system like I-Help. There are technical and social dimensions to the usage issue. The paper briefly overviews I-Help, and then describes the various deployments. The software engineering and usage lessons are then elaborated, drawing on data gathered by I-Help itself during its various deployments and on questionnaires handed out to student users at the end of two of the deployments. These lessons are, we believe, useful not just in the I-Help context, but for any AIED researchers who plan to deploy a complex system in the real world for a large number of users.

1. Introduction

I-Help is a peer-help system designed to assist learners as they engage in authentic problem-solving activities. It works by locating resources (both online and human) that are particularised to a learner's help request. The I-Help project has been ongoing for a number of years, with descriptions of various aspects appearing in the research literature. The research has explored a number of interesting AIED research issues, especially in the areas of learner modelling and agent technology. In the last few years we have moved beyond research prototypes and have begun to deploy various versions of I-Help in large-scale experiments involving hundreds and sometimes thousands of learners. This has led to a whole new set of challenges and lessons learned. The focus of this paper is on these large-scale deployments and what we have learned from them.

While on the surface I-Help resembles a simple environment for sharing messages in public and private discussion areas with the help of a personal agent, underlying I-Help there is a significant and complex system. There are many personal agents that communicate with each other and with application agents of various sorts; there are learner models that are spread across the many agents in the system; and there are inference mechanisms to process the learner models to locate appropriate helpers. It is a huge effort to build such a complex system and at the same time make it robust, scalable, useable, and usefully intelligent and adaptive to individual learning needs. The first part of this paper explores the software engineering lessons that we have learned through several deployments of I-Help. The second part explores the lessons we have learned from these deployments about how, why, and whether students use the system. Drawing on performance data and post-hoc questionnaires we explore students' actual usage of the system and draw some preliminary conclusions about student use of I-Help. First, however, we introduce I-Help and describe the various deployments that we have carried out.

2. The I-Help System

I-Help has two components: public discussion (I-Help Pub) and private discussion (I-Help 1-on-1).

Public Discussion: In I-Help Pub, learners can post questions, comments and responses to forums. These postings are shared with their peers. Forums are clustered into groups, and group membership determines access: a person who is a member of a group can access the forums created for that group. I-Help Pub is used asynchronously.

Private Discussion: The second I-Help component supports one-on-one private discussions (or help dialogues) between a learner (the helpee) and a single peer or expert (the helper). These dialogues may be synchronous or asynchronous. The following illustrates the sequence of events for a help request in I-Help 1-on-1:

1. A learner contacts their personal agent to issue a help request;
2. The learner's agent negotiates with the agents of other learners, to locate potential helpers;
3. The top N matches are notified that there is a help request waiting;
4. The first of the contacted helpers to accept the request starts a one-on-one interaction with the helpee. Requests to other potential helpers are cancelled;
5. Upon completion of the interaction, each learner receives a brief evaluation form through which they evaluate their partner, for student modelling purposes.
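To make the "first acceptance wins" behaviour of steps 3 and 4 concrete, the following is a minimal sketch of how a helpee's agent could notify the top N candidates and keep only the first acceptance. It is an illustration only: the class and method names (Helper, waitForFirstAcceptance, and so on) are hypothetical and do not come from the I-Help code.

import java.util.List;
import java.util.concurrent.*;

// Sketch of steps 3-4: notify the top-N candidate helpers in parallel; the first
// one to accept wins and the outstanding requests are cancelled.
// All names here are hypothetical, for illustration only.
public class HelpRequestSketch {

    record Helper(String name, boolean willing, long thinkTimeMs) {}

    static String waitForFirstAcceptance(List<Helper> topN) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(topN.size());
        try {
            // Each task completes normally only if the helper accepts;
            // invokeAny returns the first acceptance and cancels the rest.
            List<Callable<String>> offers = topN.stream()
                .map(h -> (Callable<String>) () -> {
                    Thread.sleep(h.thinkTimeMs());          // helper reads the request
                    if (!h.willing()) throw new IllegalStateException(h.name() + " declined");
                    return h.name();                        // helper accepts
                })
                .toList();
            return pool.invokeAny(offers);
        } finally {
            pool.shutdownNow();                             // cancel remaining offers
        }
    }

    public static void main(String[] args) throws Exception {
        List<Helper> topN = List.of(
            new Helper("alice", false, 100),
            new Helper("bob",   true,  300),
            new Helper("carol", true,  800));
        System.out.println("Help session starts with: " + waitForFirstAcceptance(topN));
    }
}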
Multiple fragmented student models underlie the 1-on-1 system [1]. Each person "owns" a personal agent, their representative in the system, and this personal agent keeps a model of its "owner" as a source of information as it acts on the owner's behalf. These models are used by personal agents when negotiating help sessions with other users in order to determine the best helpee-helper matches [2]. User model information is obtained from the learner (through stated availability and self-assessment of knowledge of different topics); from the short peer evaluations; from a determination of whether or not the student is currently or frequently online; and from I-Help's observations of student participation in both the public and private discussions. The public and private discussions may be used together, or the two components may be used independently. Whichever is used, the obvious educational benefit to students is that those requiring help receive assistance at the time they need it. Furthermore, peers providing help should also benefit from the reflection necessary to formulate an acceptable explanation.

3. I-Help Deployments

We discuss three deployments of I-Help in classes at the University of Saskatchewan: 1. Sept.-Dec. 1999; 2. Jan.-Apr. 2000; 3. Sept.-Dec. 2000. In the first two, I-Help Pub and I-Help 1-on-1 were separate systems. They were integrated in deployment 3. Given their history as distinct sub-systems, we discuss I-Help Pub and 1-on-1 deployments separately below.

Deployments 1 and 2 of I-Help Pub allowed students to post questions and answers in threaded forums, having a structured, organised environment as a benefit to the learner. Deployment 1 had around 600 users. Deployment 2 was available to around 1000 users, but was actually used by about 750. Since email notification reminders to visit forums can increase usage, this was introduced for deployment 3. However, rather than emailing messages with reference to all new postings [e.g. 3], deployment 3 allowed notifications in reaction to postings of interest (users can request email notification of new postings in a particular forum, by a particular author, on a particular topic, and responses to a particular posting), to increase the utility of notifications. The other major innovations for deployment 3 were the addition of: multiple views (users can create their own sets of forums, each view forming a single perspective through which to access forums); a search facility (searches can be performed according to topic, keywords or author); and a choice of English or French interface. Deployment 3 was available to 1600 students (all undergraduate courses in the Department of Computer Science), and to 100 students in two courses in Law, also at the University of Saskatchewan.
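As an illustration of the kind of selective notification just described, the sketch below shows one plausible way to represent a per-user subscription (a particular forum, author, topic, or replies to a particular posting) and to test new postings against it. The record and field names are hypothetical; this is not the I-Help Pub implementation.

import java.util.Optional;

// Hypothetical sketch of deployment 3's selective e-mail notifications: a user
// subscribes to postings in a forum, by an author, on a topic, or replies to a
// specific posting, and is e-mailed only when a new posting matches.
public class NotificationSketch {

    record Posting(String forum, String author, String topic, Integer inReplyTo) {}

    // Empty Optionals mean "no constraint on this attribute".
    record Subscription(Optional<String> forum, Optional<String> author,
                        Optional<String> topic, Optional<Integer> replyToPosting) {

        boolean matches(Posting p) {
            return forum.map(f -> f.equals(p.forum())).orElse(true)
                && author.map(a -> a.equals(p.author())).orElse(true)
                && topic.map(t -> t.equals(p.topic())).orElse(true)
                && replyToPosting.map(id -> id.equals(p.inReplyTo())).orElse(true);
        }
    }

    public static void main(String[] args) {
        // Notify me of new postings on the topic "recursion" in the CS 111 forum.
        Subscription sub = new Subscription(Optional.of("CS 111"), Optional.empty(),
                                            Optional.of("recursion"), Optional.empty());
        Posting p = new Posting("CS 111", "alice", "recursion", null);
        if (sub.matches(p)) {
            System.out.println("Send e-mail notification for posting by " + p.author());
        }
    }
}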
Turning to the private discussion component, deployment 1 of I-Help 1-on-1 used a synchronous chat environment. At that time, I-Help sought the single best helper, according to their knowledge of the topic. Knowledge was organised in detailed concept maps. The system was able to support about 50 personal agents, and was offered on a voluntary basis to 100 students, but there was very little usage.

In deployment 2, synchronous-asynchronous messaging replaced the chat because the previous version was dependent on the selected helper being online at the time, and willing to engage in the help session. For the same reason, I-Help located the top five potential helpers to increase the likelihood of a quick response. Simple topic labels replaced the concept maps, because students did not want to maintain such a detailed learner model. In addition to knowledge level, helpfulness (as evaluated by previous helpees) and eagerness (online activity) were modelled, and this information was used alongside knowledge level in matching partners. Learners could also create a 'friends' list: people from whom they would particularly like to receive help, and to whom they would offer a discount in the event that they required help (I-Help agents and students are motivated to interact through a virtual currency, see Section 5). Users could similarly construct 'banned' lists: people with whom they did not wish to interact. Topics could also be banned. The number of personal agents that could be supported was scaled up to about 200. In deployment 2 I-Help 1-on-1 was offered to 322 first year computer science students for almost three weeks. Of these, 76 individuals registered to use the system. Among these, some used the 1-on-1 facility extensively; others used it rarely. There were 86 help requests in total over this three week period. Of those who were registered for both I-Help 1-on-1 and I-Help Pub at that time, 31% used the 1-on-1 facility only; 38% used I-Help Pub only; and 31% used both.

Major extensions were produced to the 1-on-1 system for the third deployment. As stated above, I-Help 1-on-1 and Pub were fully integrated for the first time. Eagerness, helpfulness and knowledge level were still criteria for matching in I-Help 1-on-1; however, activity in I-Help Pub (number of postings read, replies made, etc.) now also contributed to the eagerness measure. The friends list had two sections: friends who receive a discount, and preferred helpers who receive a premium. The banned list was similarly divided: users could ban individuals as helper, helpee or both. In addition to the previous attributes, learners were able to provide a greater range of information to their agent, for student modelling: they could indicate how frequently they were willing to be contacted as helper; the maximum number of sessions in which they were prepared to be involved at one time; the importance of earning currency; and their ability to help. (The latter was used alongside peer evaluations of helpfulness.) For the role of helpee, the learner could indicate the relative importance of the following in a helper: knowledge level, helpfulness, speed of response, cognitive style and currency. These attributes were then weighted appropriately before the initiation of agent negotiations. Deployment 3 had over 400 agents. Due to this technology limit, the fully integrated I-Help system (with 1-on-1 and Pub) was made available to 326 students in 2 courses. As discussed further in section 5, there was very little usage by the students in one course of either component of I-Help, while usage in the other course was focussed mainly on I-Help Pub.
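The matching just described can be pictured as a weighted scoring of candidate helpers, with the weights supplied by the helpee. The sketch below is only our reading of that description: the attribute names, the 0-1 scaling and the linear combination are assumptions for illustration, not the negotiation algorithm the agents actually use.

import java.util.*;

// Illustrative sketch of deployment 3's partner matching: each candidate helper is
// scored on knowledge level, helpfulness, speed of response, cognitive-style fit and
// asking price (currency), weighted by the helpee's stated preferences.
// Attribute names and the linear scoring rule are assumptions, not I-Help's code.
public class MatchingSketch {

    record Candidate(String name, double knowledge, double helpfulness,
                     double speed, double styleFit, double priceAttractiveness) {}

    // Helpee-supplied importance weights (assumed already normalised to sum to 1).
    record Weights(double knowledge, double helpfulness, double speed,
                   double styleFit, double price) {}

    static double score(Candidate c, Weights w) {
        // All attributes are assumed to be pre-scaled to [0, 1].
        return w.knowledge()   * c.knowledge()
             + w.helpfulness() * c.helpfulness()
             + w.speed()       * c.speed()
             + w.styleFit()    * c.styleFit()
             + w.price()       * c.priceAttractiveness();
    }

    // Rank the candidates and keep the top N, who would then be notified (Section 2).
    static List<Candidate> topN(List<Candidate> all, Weights w, int n) {
        return all.stream()
                  .sorted(Comparator.comparingDouble((Candidate c) -> score(c, w)).reversed())
                  .limit(n)
                  .toList();
    }

    public static void main(String[] args) {
        Weights w = new Weights(0.4, 0.3, 0.2, 0.05, 0.05);   // helpee values knowledge most
        List<Candidate> helpers = List.of(
            new Candidate("alice", 0.9, 0.6, 0.4, 0.5, 0.7),
            new Candidate("bob",   0.7, 0.9, 0.8, 0.5, 0.3),
            new Candidate("carol", 0.5, 0.5, 0.9, 0.9, 0.9));
        topN(helpers, w, 2).forEach(c -> System.out.printf("%s: %.2f%n", c.name(), score(c, w)));
    }
}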
4. I-Help: Software Engineering Lessons

In this section we discuss some of the architectural and software engineering issues that have arisen as one deployment of I-Help has led to the next. We start with design requirements for I-Help. We then provide an historical overview at the technology level of the various versions of I-Help, showing how technological challenges have led to interesting solutions as I-Help has become ever more sophisticated. We conclude the section with a brief overview of some of the main software engineering lessons that we have learned.

Through its various versions I-Help has had three basic requirements: to be accessible, dependable and scalable. To avoid lack of use due to the accessibility problem sometimes experienced early in a project [e.g. 4], I-Help had to be widely available. Since it is required to operate in a highly heterogeneous environment, the best solution to the accessibility problem was to make I-Help available from a simple web browser. The main http-clients targeted have been Netscape and Internet Explorer. Dependability is the second requirement. It has been crucial to ensure that the services offered to students are available, reliable, secure and safe, and that the system does not crash. The third requirement is that I-Help is able to scale up to allow more students to use it in a wider variety of contexts.

Even before the large-scale deployments discussed in section 3, there were several "proof of concept" prototypes of both I-Help Pub and I-Help 1-on-1. Early I-Help Pub prototypes used a public-domain database, ODBC and Perl-cgi scripts. Every page was generated by the server and almost every click required a screen refresh. The early tests with users resulted in such slow performance that they would not use the system. To achieve scalability and reasonable system performance, it became clear that a commercial database with direct web support was required. After several failed attempts to build a reliable Oracle-based application (due to the steep learning curve associated with Oracle application development), finally a stable and scalable I-Help Pub was built. This allowed deployment 1 to proceed.

The first I-Help 1-on-1 "proof of concept" prototype took a single-process server approach. It was written in Java (jdk1.1) and designed to run on a single PC. The application consisted of three modules: a simple communication module (ComServer), an agent host and a module to handle the database connection issues. The agents used in this implementation were simple Java threads that reacted to incoming messages. Small applets embedded in the page ensured a connection of the clients with the application. While all tests indicated a stable system, the first real usage ended in disaster. The sudden load caused by simultaneous login of over 60 users within a minute led to a temporary high demand of processor power by the DB-Connection module. This meant that the agents had too little power, which led to slow creation of web pages. The reaction of the students to the decreased performance was a series of logoff-login commands, which led to an extremely high load, which, in turn, resulted in total collapse of the application. With this first disappointing experience in mind the students refused to work with improved versions that year. We clearly had to do better if we were to go beyond a proof of concept prototype.
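The prototype just described implemented each agent as a simple Java thread reacting to incoming messages. A minimal sketch of that pattern is shown below (an agent thread draining a message queue); it is our reconstruction of the idea for illustration, not code from the prototype, and the names are hypothetical.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch of "an agent as a Java thread that reacts to incoming messages":
// other components drop messages in the agent's queue; the agent thread handles
// them one at a time. A reconstruction for illustration, not the prototype's code.
public class AgentThreadSketch {

    record Message(String performative, String sender, String content) {}

    static class PersonalAgent extends Thread {
        private final String owner;
        private final BlockingQueue<Message> inbox = new LinkedBlockingQueue<>();

        PersonalAgent(String owner) { this.owner = owner; }

        void deliver(Message m) { inbox.add(m); }   // called by the communication module

        @Override public void run() {
            try {
                while (!isInterrupted()) {
                    Message m = inbox.take();        // block until a message arrives
                    // React to the message; a fuller agent would consult its goal queue here.
                    System.out.println(owner + " handling " + m.performative() + " from " + m.sender());
                }
            } catch (InterruptedException e) {
                // shut down quietly when the host stops the agent
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PersonalAgent agent = new PersonalAgent("alice");
        agent.start();
        agent.deliver(new Message("ask", "bob", "Can you help with assignment 2?"));
        Thread.sleep(200);
        agent.interrupt();     // stop the agent thread
    }
}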
Thus, the version of the I-Help 1-on-1 architecture in deployment 1 attempted to overcome the problems of resource conflicts by using RMI to distribute the server-side application. Each module became an independent process. In addition, more complex agents that were able to communicate via KQML messages were introduced. These contained simple goal-queues and rudimentary planners. Further, the agents were enabled to observe the current load and plan their activities accordingly. Using this approach it was discovered that the use of applets led to serious problems (because of different Java versions supported by different browsers and hardware platforms). In addition it turned out that memory leaks (which do happen in Java!) led to crashes of the agent host. Monitoring the system and restarting it periodically before memory consumption reached critical levels ensured a minimal degree of stability. Unfortunately, usage of the system peaked on weekends before assignment deadlines, which resulted several times in crashes at the time of greatest need. The students reacted to this instability by avoiding the tool.

The next implementation of the I-Help 1-on-1 architecture (which underpinned both deployments 2 and 3) represented a complete re-implementation of all parts. CORBA was adopted as an object sharing protocol, since it promised the best standard and the easiest way to ensure a scalable system. This version of the system consisted of a database connection and servlet engine for communication, as well as an agent for each user and a user host. The servlets ensured the connection of the clients with the other parts of the implementation and replaced the ComServer. In addition a user host was introduced that was responsible for handling all user data and also served as a cache for user-specific web pages. Each module was implemented in a way that one main process (master) controlled various sub-processes. This technique ensured scalability by having several agent hosts and database connection processes. By spreading the processes over several machines, resource conflicts were avoided. This was the first stable version, which was able to serve up to 400 users.
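To illustrate the master/sub-process idea described above (one master controlling several agent hosts, so that agents and database connections can be spread over machines), here is a small hypothetical sketch of a master placing each new personal agent on the least-loaded agent host. It is a simplification for illustration; the actual mechanisms of this version are not shown.

import java.util.*;

// Hypothetical sketch of the master/sub-process structure: a master keeps track of
// several agent hosts (possibly on different machines) and places each new personal
// agent on the least-loaded host, so that no single process becomes the bottleneck.
public class MasterSketch {

    static class AgentHost {
        final String machine;
        final List<String> agents = new ArrayList<>();
        AgentHost(String machine) { this.machine = machine; }
        int load() { return agents.size(); }
    }

    static class Master {
        private final List<AgentHost> hosts;
        Master(List<AgentHost> hosts) { this.hosts = hosts; }

        // Place the user's personal agent on the least-loaded host.
        AgentHost placeAgent(String userId) {
            AgentHost target = Collections.min(hosts, Comparator.comparingInt(AgentHost::load));
            target.agents.add(userId);
            return target;
        }
    }

    public static void main(String[] args) {
        Master master = new Master(List.of(new AgentHost("hostA"), new AgentHost("hostB")));
        for (String user : List.of("alice", "bob", "carol", "dave")) {
            AgentHost h = master.placeAgent(user);
            System.out.println(user + "'s agent placed on " + h.machine + " (load " + h.load() + ")");
        }
    }
}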
Looking at overall software engineering lessons learned in the development of the various I-Help prototypes, one important decision was to use a database for most system information, an idea explored first in I-Help Pub. This decision has led to enhanced dependability and robustness. It is also easy to add new information and to find out information for a variety of purposes beyond peer matching (for example for our empirical studies). However, ORACLE has a very steep learning curve, and since it is a proprietary product, the portability of I-Help is restricted.

Another decision that stands out is to embed I-Help in an agent architecture, ideal for scalability and many other things. Off-the-shelf agent solutions were explored but most solutions were too limited, involving one process per agent, thus making scalability to thousands of agents an impossible goal. We therefore created our own multi-agent architecture named MAGALE [5], and this has proven to be critical to our success in getting 400 distinct personal and application agents working at the same time. In fact, the MAGALE architecture is an important ingredient in our future plans for this system. As we incorporate more and more I-Help functionality into the multi-agent paradigm, it becomes easier to modify a particular agent's capability and watch its effects on the system.

There is a down side to agents, however. The nature of emergent behaviour resulting from large numbers of interacting, semi-autonomous agents means that any notion of "correct" behaviour is very difficult to define. This suggests that there may be no way to predict whether a system will scale up without building it first. In fact, even after it has been built and tested with simulated workloads, it is sometimes hard to predict the kind of workload that real users might apply. Further, simulated workloads that represent realistic situations with multi-user distributed systems are themselves very time-consuming and difficult to build. Often the deployment itself is the first real load test, so on the first day, when hundreds or thousands of students simultaneously log on, there is a real risk of an unpleasant surprise (the sad story of many "dot coms" whose servers failed to handle the load on day one of operation).

Another software engineering lesson learned in this project is that a system in constant evolution must be carefully managed during major deployments. Change management and version control are important issues. There is a great temptation to apply partially tested hot-fixes to code in the running environment. This has caused embarrassment to our developers on many occasions and caused confusion to our users when new features (or new bugs) or subtle changes began to appear without adequate explanation. One of the goals of experimental work with deployed systems is to compare functionality by offering different versions to different sub-groups of users. For example, two different agent negotiation algorithms were being used in deployment 3 of I-Help 1-on-1. The difference in behaviour between the two algorithms would be imperceptible to users, but would provide different candidate helpers for a given situation. Adding this kind of new functionality is relatively simple if the system is well designed. Clearly, version management is crucial in all of these situations. An important lesson learned in this area was to obtain traces of user behaviour and snapshots of learner model states over time so that post-hoc off-line experiments could be run to simulate real effects.
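The last lesson above, keeping traces of user behaviour and periodic snapshots of learner model state so that experiments can later be replayed off-line, might look something like the following sketch. The event types, fields and file formats are hypothetical; the point is only that both raw events and timestamped model snapshots are recorded.

import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Instant;
import java.util.Map;

// Hypothetical sketch of trace and snapshot logging for post-hoc off-line experiments:
// every user action is appended to a trace file, and each learner model is written out
// periodically so that matching algorithms can later be replayed against the models as
// they were at the time. Formats and names are illustrative only.
public class TraceLogSketch {

    static void logEvent(PrintWriter trace, String user, String action, String detail) {
        // One timestamped line per user action, e.g. posting read, help request issued.
        trace.printf("%s\t%s\t%s\t%s%n", Instant.now(), user, action, detail);
        trace.flush();
    }

    static void snapshotModel(Path dir, String user, Map<String, Double> topicKnowledge)
            throws IOException {
        // Dump the learner model for this user, labelled with the snapshot time.
        Path file = dir.resolve(user + "-" + Instant.now().toEpochMilli() + ".model");
        StringBuilder sb = new StringBuilder();
        topicKnowledge.forEach((topic, level) -> sb.append(topic).append('=').append(level).append('\n'));
        Files.writeString(file, sb.toString());
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("ihelp-traces");
        try (PrintWriter trace = new PrintWriter(Files.newBufferedWriter(dir.resolve("trace.log")))) {
            logEvent(trace, "alice", "HELP_REQUEST", "topic=recursion");
            logEvent(trace, "bob", "POSTING_READ", "forum=CS 111");
            snapshotModel(dir, "alice", Map.of("recursion", 0.4, "arrays", 0.8));
        }
        System.out.println("Trace and snapshots written to " + dir);
    }
}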
5. I-Help: Usage Lessons

During the various deployments of I-Help, the main goals were to determine whether or not: 1. the system helps in supporting student learning; 2. it stimulates more and better learning interactions among the students; 3. people learn through helping/explaining to other people. While the final proof that we achieved these goals requires data from many more deployments, sufficient data has been obtained to support these hypotheses and to reveal interesting insights on educational and social issues. Various kinds of data have been collected. In all deployments trace data has been collected by I-Help as learners interact. This data has become increasingly fine-grained from deployment to deployment as we have traced the additional functionality. In addition we distributed questionnaires after deployments 2 and 3. In deployment 2, we surveyed only the 76 students who registered for I-Help 1-on-1, receiving 64 responses. As stated previously, the 1-on-1 registrants were fairly evenly split between primarily using I-Help 1-on-1, I-Help Pub and both. (However, 86% felt that the availability of both components was useful, despite the lack of integration at this stage.) In deployment 3 we surveyed some of the first and third year courses to obtain opinions from students at different levels, and from courses with different usage patterns. Of our 538 responses, 308 came from students who stated they had sometimes, frequently or very frequently used I-Help (others used it only rarely (141) or never (89)). The analysis below is based on the trace data in the three deployments to date, as well as responses to the questionnaires collected in deployments 2 and 3.

Table 1: I-Help Pub usage, deployment 3

Course   Total learners   Total threads   Total replies   Total reads   Threads by learners   Replies by learners   Reads by learners
CS 100   343              257             318             23601         173 (67%)             117 (37%)             22108 (94%)
CS 111   348              796             1306            158112        762 (96%)             837 (64%)             151789 (96%)
CS 116   251              28              27              3402          24 (86%)              18 (67%)              3071 (90%)
CS 330   75               162             263             21809         149 (92%)             189 (72%)             20511 (94%)
CS 370   135              260             147             17043         65 (25%)              61 (41%)              14277 (84%)