
Multimodal



Let's go after this one.


----- Begin Included Message -----

Date: Wed, 6 Sep 1995 08:47:25
To: users@asel.udel.edu
Subject: Sharon Oviatt: Call for Papers on "Multimodal Interfaces"
From: Kathleen McCoy <mccoy@louie.udel.edu>


------- Forwarded Message

Date: Tue, 5 Sep 95 15:21 PDT
From: Sharon Oviatt <oviatt@cse.ogi.edu>
Subject: Call for Papers on "Multimodal Interfaces"
Cc: oviatt@church.cse.ogi.edu


                           Call for Papers

       Special Issue of the journal Human-Computer Interaction

               on the topic of "Multimodal Interfaces"


A new generation of interactive multimodal systems is beginning to emerge,
with advanced interfaces designed to permit people to speak and write in
their native language; to gesture, draw, and point while speaking; to control
devices via gaze; to interact with 2-D or 3-D visual displays involving
complex graphics, maps, videos, and animated human faces; and to receive
feedback via speech, non-speech audio, tactile output, and other media.  The
goal of multimodal interface design is to develop means of human-computer
interaction that are more transparent, flexible, efficient, easy to learn and
use, and expressively powerful, and that support more challenging
applications, use under more adverse conditions, and use by a substantially
broader spectrum of the population. Scientific research on the design of
optimal multimodal interfaces, as well as evaluation of their ability to
deliver performance advantages over simpler unimodal alternatives, is the
focus of this Special Issue on "Multimodal Interfaces."

The editors, Sharon Oviatt and Wolfgang Wahlster, are soliciting high-quality
manuscripts, either substantive original research contributions or reflective
review papers, that report on some aspect of multimodal interfaces and their
design -- as distinct from reports solely about multimodal system development.
Multimodal interface topics of special interest include (but are not limited
to):

 (1) issues in the selection of input and output modalities, and their
     integration and synchronization;
 (2) predictive modeling of perceptual, motor, cognitive, or linguistic
     aspects of humans' coordinated use of multiple input modes;
 (3) evaluation of the extent to which multimodal system performance
     supports human needs and usage patterns;
 (4) consideration of the overall interactive input/output cycle between
     human and computer, including the effect of system displays, prompts,
     and feedback on subsequent user input;
 (5) topics posing major hurdles to the successful development and use of
     multimodal systems in field settings, such as error-handling
     capabilities; and
 (6) methodological infrastructure needed to advance the research and
     implementation of multimodal interfaces.

Human-Computer Interaction is an interdisciplinary journal concerned with
theoretical, empirical, and methodological issues related to the science of
human-computer interaction, with a focus both on the user and on system
design as it affects the user. Manuscripts are encouraged from diverse
perspectives, including from researchers affiliated with subfields
representing component technologies (e.g., speech, pen, natural language
processing, visualization) as well as from researchers in the HCI community.
Please note that the journal's usual reviewing procedures and editorial
standards will apply to Special Issue manuscripts. Submissions should include
seven copies of the original manuscript, complete with figures, sent by air
mail if from outside North America, to:

        Sharon Oviatt
        Center for Human-Computer Communication
        Computer Science Department
        Oregon Graduate Institute of Science & Technology
        P.O. Box 91000
        Portland, Oregon 97291

Inquiries about this Special Issue should be sent to the Special Issue editor
at oviatt@cse.ogi.edu. Submitters should advise the Special Issue editor of
their intent to submit as soon as they can, but no later than September 15,
1995. The deadline for receipt of manuscripts is October 30, 1995.








------- End of Forwarded Message



----- End Included Message -----