- "text": "\n# smartagents-workshop\nRepository for the Web pages for the upcoming Smart Agents Workshop, e.g., the Call-for-Participation\n\n## General background\n* Thanks to the advancement of HTML5 and related Web technologies, Web applications with speech capability are getting more and more popular for ease of user interaction and richer user experience, such as Apple's SIRI, Google's Voice Assistant, and Amazon's Alexa.\n* Voice agents are one of the essential applications on various devices, especially mobile phones, tablet devices, eBook readers, and gaming platforms. Moreover, the integration of speech capabilities into traditional platforms such as TVs, audio systems, and automobiles was a recent major technical advancement.\n* The role of smart speakers as a portal for IoT services, including smart homes, tends to be overshadowed by the broader capabilities of LLMs.\n* During [one of the breakout sessions at TPAC 2019 in Fukuoka](https://www.w3.org/2019/09/18-voice-minutes.html), there was a discussion about potential needs for improved voice agents for web services.\n* This resulted in the creation of [Issue 221 Proposal: Workshop on User-friendly Smart Agents on the Web - Voice interaction and what's next?](https://github.com/w3c/strategy/issues/221)\n\n## Focus\n* See the current status of the voice-enabled smart platforms integrating multi-vendor services/applications/devices:\n * Interaction with smart devices also in the Web of Things\n * Control from Web browsers\n * Interoperability and access to controls for accessibility/usability, e.g., smart navigation\n* Discuss the gaps in voice interaction technology for global deployment across all languages, including the integration of large language models (LLMs) to enhance natural language understanding and generation.\n\n## Possible topics\n* Clarification of use cases and their requirements\n* Summary of the current status:\n * Overview of existing browser support and platforms\n * Integration of 
LLMs to enhance browser capabilities and platforms\n * Common interoperability issues among browsers and platforms\n * Missing features in existing APIs\n* Needs of the users and developers:\n * Enhanced interaction with LLMs for improved usability, addressing issues such as:\n * Hallucinations: LLMs may generate outputs that seem plausible but are factually incorrect.\n * Ambiguity in outputs: Inconsistent or vague responses can cause confusion in automated workflows.\n * Lack of accountability: Identifying the root cause of errors in an LLM’s predictions can be challenging.\n* Utilizing advanced voice technology for Web services, including speech style, expression, feeling, and emotion.\n* Managing input entities (sensors/applications) and output entities (actuators/devices/digital twins) from various vendors and their coordination.\n* Addressing presentation issues such as how, what, and when to transfer necessary information from input entities (users, devices, or applications) to output entities (users, devices, or applications).\n* Integrating multiple interchangeable modalities (typing, handwriting, voice, etc.).\n* Underlying technologies:\n * Advanced integration of multiple applications, e.g., via Agentic AI\n * Applications, services, and devices from various vendors\n * Standardized data formats and protocols for data transfer\n * State transition management models and service lifecycles\n * Enhanced models and architectures for voice interaction and their relationship with Web technologies\n* Horizontal platform considerations:\n * Discovery of resources\n * Privacy and security\n * Accessibility and usability\n * Internationalization and compatibility with region-specific technologies\n\n## Examples of related use cases\nThe related technology area is broad, including:\n\n* Voice agents\n* Connected cars\n* Smart homes, smart factories, and smart cities\n* Smart speakers and smartphones as portals/user devices\n\nFor example, Hybrid TV services 
(Web+TV integration based on HTML5, Second Screen, TTML, etc.) and smart home devices and services, possibly incorporating proprietary technology like MiniApps, can offer the following use cases:\n\n* Asking the voice agent on the TV in the living room to order takeaway, e.g., \"I want to order a pizza.\"\n* Using voice commands to choose the food and saying \"checkout\" to the smartwatch to process the payment.\n\nAnother example is searching for television content. The user can ask, \"Play [name of an episodic TV show],\" and the voice assistant will respond with, \"Here's what I found,\" while displaying search results on the TV. A useful user requirement may be the ability to request congruent user feedback (i.e., if voice is used for input, then speech is used for feedback).\n\n> **NOTE:** The above is just a few examples of the possible use cases, and the development of use cases and their requirements will be one topic of the workshop.\n\n## Who to attend?\n* Many possible stakeholders including:\n * Service providers/System implementers\n * Govt (like Singapore)\n * Users from various countries/communities\n* Liaisons\n * CHIP (Amazon, Apple, Google, and Zigbee)\n * OCF\n * oneM2M\n * Singapore Govtech\n\n<!--\nSee also the [rendered HTML](https://w3c.github.io/smartagents-workshop/)\n-->\n"
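The congruent-feedback requirement mentioned above can be sketched in a few lines. This is only an illustrative assumption, not any existing API or standard: the function name, the modality set, and the mapping are all hypothetical.

```javascript
// Hypothetical sketch of "congruent feedback": the feedback modality
// is chosen to match the input modality (voice in -> speech out).
// The modality names and the mapping are illustrative assumptions.
function congruentFeedback(inputModality) {
  const feedbackFor = {
    voice: "speech",      // spoken input -> spoken response
    typing: "text",       // typed input -> on-screen text
    handwriting: "text",  // handwritten input -> on-screen text
  };
  // Fall back to visual text for any unlisted modality.
  return feedbackFor[inputModality] ?? "text";
}

console.log(congruentFeedback("voice")); // "speech"
```

A real agent would of course negotiate this with the output device (e.g., a TV that can both speak and display results), which is exactly the kind of coordination question the workshop aims to discuss.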