Puzzle Paragraph
https://dl.acm.org/doi/pdf/10.1145/3613904.3642754
Large Language Models (LLMs) demonstrate dynamic capabilities and appear to understand complex and ambiguous prompts in natural language. However, adjusting interactions with LLMs is challenging for both interface designers and end-users. A central issue is the limited understanding of how human cognitive processes start from goals and form intentions to execute actions. This gap is also overlooked in established interaction models such as Norman’s gulfs of execution and evaluation. To address it, we theorize how end-users can “imagine” their goals into clear intentions and create prompts that obtain desirable responses from LLMs. We define the imagining process in terms of three discrepancies, each concerning something unknown to the user: (1) what the task should be, (2) how to instruct the LLM to perform the task, and (3) what to expect from the LLM’s output in order to achieve the goal. Finally, we provide recommendations to narrow the gap of imagination in human-LLM interactions.
- The theorization of Prompt Engineering?
- Three axes
- (1) Specificity of intention, (2) Flexibility of function, (3) Determinism of output
Imagination involves at least three challenges for humans interacting with LLM systems: (1) setting my goals and intentions for the LLM to accomplish the task – the capability gap; (2) instructing the LLM optimally about my goals (i.e., prompt engineering) – the instruction gap; and (3) knowing what to expect from the LLM’s output – the intention gap.
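To make the three gaps concrete, here is a minimal sketch assuming the OpenAI Python SDK; the model name, prompts, and cover-letter task are my own illustrations, not examples from the paper.

```python
# A minimal sketch of the three gaps, assuming the OpenAI Python SDK.
# Model name, prompts, and the cover-letter task are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Instruction gap: one goal ("make my cover letter sound more confident")
# can be phrased many ways, and the phrasing changes the result.
prompts = [
    "Improve this cover letter.",  # vague
    "Rewrite this cover letter in a confident tone, keeping it under "
    "250 words and preserving every factual claim.",  # specific
]

for prompt in prompts:
    # Capability gap: the user cannot know in advance whether the model
    # can actually do this task well.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt + "\n\n<letter text>"}],
    )
    # Intention gap: the user must judge whether this output matches the
    # goal they originally imagined.
    print(response.choices[0].message.content)
```

The vague and specific prompts share one goal; comparing their outputs against that goal is the intention-gap judgment described above.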
This paper formulates a new interaction model for human-LLM interfaces in which intentions themselves become actions. Our main contributions are: (1) characterizing the extensive functionality and the new challenges of linking intentions and outcomes in transformative LLM natural language interfaces; (2) identifying an updated model of human-machine interaction that specifies the process of imagining execution; and (3) proposing a set of design patterns and guidelines for human-LLM interfaces, along with an analysis of three types of generation tasks.
- The paper builds on Donald Norman’s seven-stage model of action (from The Design of Everyday Things)
- It incorporates the process of forming human intentions into that model
In much HCI research, the second stage – intention formation – is assumed (Hornbæk and Oulasvirta, 2017). For example, when cutting and pasting text in a word processor or clicking the “bold” font button, how much do we know (or care) about the underlying intentions that lead users to perform these actions? We believe that this gap from goals to intentions has been unintentionally overlooked in traditional design approaches but emerges as a crucial issue in human-LLM interactions. In this section, we explore the overlooked aspect of intention formation during interaction and examine its role in LLM-driven interfaces.
- I see~ (blu3mo)(blu3mo)(blu3mo)
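For reference, Norman’s seven stages and the two gulfs they span can be laid out as a simple checklist; the stage wording below is paraphrased from The Design of Everyday Things, and the code itself is just my illustration.

```python
# Norman's seven stages of action (The Design of Everyday Things), annotated
# with the gulf each stage belongs to. Stage 2, forming the intention, is the
# step this paper argues is taken for granted in classic GUIs.
NORMAN_STAGES = [
    ("1. form the goal",            "goal"),
    ("2. form the intention",       "gulf of execution"),  # the overlooked stage
    ("3. specify an action",        "gulf of execution"),
    ("4. execute the action",       "gulf of execution"),
    ("5. perceive the world state", "gulf of evaluation"),
    ("6. interpret the perception", "gulf of evaluation"),
    ("7. evaluate the outcome",     "gulf of evaluation"),
]

for stage, gulf in NORMAN_STAGES:
    print(f"{stage:30} ({gulf})")
```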
In LLM systems, the link between the user’s intentions and the system’s actions is unclear, and end-users lack an appropriate mental model of how LLMs behave.
- Indeed (blu3mo)(blu3mo)
As a result, interactions with LLMs become challenging for users. In the following section, we examine how these constraints make it difficult for users to form intentions during interaction.
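One way an interface might narrow these gaps, in the spirit of the paper’s recommendations, is to help the user turn a vague goal into an explicit intention before acting on it. The sketch below is my own illustration; `ask_llm` is a hypothetical stub standing in for any chat-completion API.

```python
# An illustrative interface pattern (a sketch, not from the paper): help the
# user turn a vague goal into an explicit intention before the LLM acts on it.

def ask_llm(prompt: str) -> str:
    """Hypothetical stub: wrap your LLM API of choice here."""
    raise NotImplementedError

def clarify_then_act(goal: str) -> str:
    # Surface the hidden intention: ask the model what it needs to know.
    questions = ask_llm(
        f"A user wants to: {goal}. "
        "List up to three questions you would need answered before doing this."
    )
    answers = input(f"{questions}\nYour answers: ")
    # Execute with the clarified intention attached to the prompt.
    return ask_llm(
        f"Goal: {goal}\nClarifications: {answers}\nNow perform the task."
    )
```

The design choice here is to spend one extra round trip eliciting the user’s intention, trading speed for a smaller gap between what the user imagined and what the model executes.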