# Squeak-SemanticText
> ChatGPT, embedding search, and retrieval-augmented generation for Squeak/Smalltalk
*Semantics* (from Ancient Greek *sēmantikós*) refers to the significance or meaning of information. While the normal `String` and `Text` classes in Squeak take a syntactic view of text as a sequence of characters and formatting instructions, `SemanticText` focuses on the sense and meaning of text. With the advent of NLP (natural language processing) and LLMs (large language models), computing systems are becoming substantially better at interpreting text. This package aims to make semantic context accessible in [Squeak/Smalltalk](https://squeak.org) by providing the following features:
- **[OpenAI API](https://platform.openai.com/docs/api-reference) client:** Currently supports chat completions, embeddings, and speech transcription/synthesis. Includes tools for managing rate limits, tracking expenses, and estimating prices for queries.
- **SemanticConversation:** Framework for conversational agents like ChatGPT as well as autonomous agents, including function calling.
- **ChatGPT:** Conversational GUI for Squeak. Supports streaming responses, editing conversations, and defining system messages.
- **SemanticCorpus:** Framework for semantic search, similarity search, and retrieval-augmented generation (RAG, aka "chat with your data") through the power of text embeddings. Implements a simple yet functional vector database.
- **Experimental tools** such as an integration of semantic search and RAG into Squeak's Help Browser or Squeak's mailing list.
For more details, install the package and dive into the class comments and code, or continue reading below.
<table width="100%">
<tr>
<td width="35%">
<p>
<strong><a href="#conversations-and-chatgpt">ChatGPT</a></strong><br>
<img alt="ChatGPT: User: Why is Squeak the best programing system in the world (in 3 very short bullets) / Assistant: 1. Live Coding Environment: Squeak offers a dynamic, live coding environment that allows for real-time changes and immediate feedback, enhancing the development and debugging process. / 2. Powerful Object-Oriented Features: It is built on a pure object-oriented paradigm, encouraging clean, modular, and reusable code, which makes it excellent for educational purposes and complex projects alike. / 3. Active Community and Rich Tools: Squeak has a vibrant, supportive community and a wealth of built-in tools and libraries that simplify complex tasks and foster innovation. / User: " src="./assets/ChatGPT.png" />
</p>
<p>
<strong><a href="#editor-integration">Editor Integration: Explain It / Summarize It / Say It</a></strong><br>
<img alt="Context menu on text field with class comment for SemanticCorpus: summarize, explain it, ask question about it..., say it, speak to type" src="./assets/explainIt.png">
</p>
<p>
<strong><a href="#openai-api-expense-watcher">OpenAI API Expense Watcher</a></strong><br>
<img alt="OpenAI API Expense Watcher: self totalExpense of an OpenAIAccount: ¢2.92 + approx ¢24.8" src="./assets/expenseWatcher.png" />
</p>
</td>
<td width="50%">
<p>
<strong><a href="#gui---experimental-help-browser-integration">Help Browser Integration:</a> Semantic Search and Retrieval Augmented Generation (RAG)</strong><br>
<img alt="Help Browser Integration: Semantic Search and Retrieval Augmented Generation (RAG). System reference for ProtoObject: Search Results > 'store floats in a file'. Smart reply (experimental, powered by AI): To store floats in a file in Squeak/Smalltalk, you can use the DataStream class. Here is an example of how to use it: / 1. Create a DataStream object and specify the file name: `stream := DataStream fileNamed: 'floats.dat'.` / 2. Write the floats to the stream: `stream nextPut: 3.14.` `stream nextPut: 2.71828.` `stream nextPut: 1.41421356.` / 3. Close the stream: `stream close.` / To read the floats back from the file, you can use the same DataStream object: ..." src="./assets/HelpSystemSearch.png" />
</p>
<p>
<strong><a href="#squeak-inbox-talk-integration">Squeak Inbox Talk Integration:</a> Similar Conversation Search</strong><br>
<img alt="Squeak Inbox Talk Integration: Similar Conversation Search. [squeak-dev] Some questions and comments regarding notation of floats and scaled decimals. Similar conversations (powered by OpenAI embeddings) / Experimental. May be biased or ineffective. / Numerics question: reading floating point constants / RE: Float equality? (was: [BUG] Float NaN's) / Rounding floats / Decimals as fractions / Bug in Floats? / Float differences / Float precision / ..." src="./assets/SIT-similarConversations.png" />
</p>
</td>
</tr>
</table>
Still under development. More might follow. Feedback and contributions welcome!
## Installation
[Get a current Squeak Trunk image (recommended) or a Squeak 6.0 image (only limited support)](https://squeak.org/downloads/) and do this in a workspace:
```smalltalk
Metacello new
baseline: 'SemanticText';
repository: 'github://hpi-swa-lab/Squeak-SemanticText:main';
get; "for updates"
load.
```
As most functionality is currently based on the OpenAI API, you need to set up an API key [here](https://platform.openai.com/account/api-keys) and paste it in the `OpenAI API Key` preference.
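Once the key is set, you can verify the setup from a workspace with any of the convenience messages described below, for example:

```smalltalk
"Quick smoke test after configuring the API key (the exact reply may vary)."
'capital of France' semanticAnswer.
```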
While the OpenAI API is not free to use, you only pay for what you use and there are no surprise charges. Tokens are really cheap: for instance, you can set a spending threshold of $5, which is enough to chat with more than 1 million words or to embed 50 million words (about 42 times the collected works of Shakespeare).
## Usage
### ChatGPT GUI (Conversation Editor)
From the world main docking bar, go to <kbd>Apps</kbd> > <kbd>ChatGPT</kbd>. Type in your prompt and press <kbd>Cmd</kbd> + <kbd>S</kbd>, or press the <kbd>Voice</kbd> button for a continuous audio conversation. In the advanced mode, you can also define system instructions and functions that the model can call. Through the window menu, you can choose a different model or edit further preferences.
### Convenience messages
Check out the `*SemanticText` extension methods on `String`, `Collection`, `SequenceableCollection`, `AbstractSound`, and others. Some examples:
```smalltalk
'smalltalk inventor' semanticAnswer. --> 'Alan Kay'
'It''s easier to invent the future than' semanticComplete. --> ' to predict it.'
#(apple banana cherry) semanticComplete: 5. --> #('date' 'elderberry' 'fig' 'grape' 'honeydew')
Character comment asString semanticSummarize.
Morph comment asString semanticAsk: 'difference between bounds and fullBounds'.
((SystemWindow windowsIn: Project current world satisfying: [:ea | ea model isKindOf: Workspace]) collect: #label)
semanticFindRankedObjects: 20 similarToQuery: 'open bugs'.
'Hello Squeak' semanticSayIt.
SampledSound semanticFromUser semanticToText.
```
### Conversations API
Basic usage is like this:
```smalltalk
SemanticConversation new
addSystemMessage: 'You make a bad pun about everything the user writes to you.';
addUserMessage: 'Yesterday I met a black cat!';
getAssistantReply. --> 'I hope it was a purr-fectly nice encounter and not a cat-astrophe!'
```
You can also improve the prompt by inserting additional pairs of user/assistant messages prior to the interaction (*few-shot prompting*):
```smalltalk
SemanticConversation new
addSystemMessage: 'You answer every question with the opposite of the truth.';
addUserMessage: 'What is the biggest animal on earth?';
addAssistantMessage: 'The biggest animal on earth is plankton.';
addUserMessage: 'What is the smallest country on earth?';
getAssistantReply. --> 'The smallest country on earth is Russia.'
```
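Few-shot messages are also handy for pinning down an output format. A minimal sketch using the same selectors (the prompt and examples are illustrative):

```smalltalk
"Few-shot prompting to constrain the output format."
SemanticConversation new
	addSystemMessage: 'You translate English words to French. Reply with the translation only.';
	addUserMessage: 'house';
	addAssistantMessage: 'maison';
	addUserMessage: 'cat';
	getAssistantReply.
```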
##### Function calling
```smalltalk
| conversation message |
conversation := SemanticConversation new.
message := conversation
addUserMessage: 'What time is it?';
addFunction: (SemanticFunction fromString: 'getTime' action: [Time now]);
getAssistantMessage.
[conversation resolveAllToolCalls] whileTrue:
[message := conversation getAssistantMessage].
message --> [assistant] 'The current time is 20:29:52.'
```
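You can also register several functions on the same conversation and let the model choose which one to call. A sketch reusing only the selectors from the example above (the prompt and the additional `getDate` function are illustrative):

```smalltalk
"Multiple zero-argument functions; the model picks the one(s) it needs."
| conversation message |
conversation := SemanticConversation new.
message := conversation
	addFunction: (SemanticFunction fromString: 'getDate' action: [Date today]);
	addFunction: (SemanticFunction fromString: 'getTime' action: [Time now]);
	addUserMessage: 'What day of the week is it?';
	getAssistantMessage.
[conversation resolveAllToolCalls] whileTrue:
	[message := conversation getAssistantMessage].
```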
##### Configuration
```smalltalk
SemanticConversation new
withConfigDo: [:config |
config temperature: 1.5.
config nucleusSamplingMass: 0.8.
config maxTokens: 200 "high temperatures may cause the model to output nonsense and not find an end!"];
addUserMessage: 'Write a short poem about Alan Kay and Smalltalk';
getAssistantReply --> 'In the realm of silicon and spark,
A visionary left his mark,
Alan Kay, with dreams unfurled,
Birthed a language to change the world.
Smalltalk, a whisper, soft and clear,
A paradigm that pioneers,
Objects dancing, message flows,
In its design, innovation grows.
A windowed world where thoughts collide,
A playground where ideas abide,
From his vision, the seeds were sown,
For the digital gardens we have grown.
So here''s to Kay, a mind so bright,
Who lit the way with insight''s light,
In every line of code, we find,
A legacy that reshapes the mind.'
```
---
You can find more examples (such as message streaming, retrieving multiple responses, and logging token probabilities) on the class side of `SemanticConversation`.
### Agents
A simple agent can be defined like this:
```smalltalk
SemanticAgent subclass: #SemanticSqueakAgent
instanceVariableNames: ''
classVariableNames: ''
poolDictionaries: ''
category: 'SemanticText-Model-Agents'.
SemanticSqueakAgent>>initializeConversation: aConversation
super initializeConversation: aConversation.
aConversation addSystemMessage: 'You are a Squeak/Smalltalk assistant.'.
SemanticSqueakAgent>>eval: aString
"Evaluate a Smalltalk expression in the running Squeak image."
<function: eval(
expression: string "e.g. '(8 nthRoot: 3)-1'"
)>
^ Compiler evaluate: aString
```
Then, i