NovelAI Features

⬅ Back to Lander

-

This section seeks to explain the various phrases you'll encounter while using NovelAI.

Artificial Intelligence
Science fiction has us think of artificial intelligence as something for robots and alien spaceships, but the reality is very different. In this context, an artificial intelligence is quite unlike the natural intelligence exhibited by humans. The phrase that conjures thoughts of robotic assassins and intelligent holographic people is actually artificial general intelligence, or AGI.

The eponymous AI that NovelAI uses is a text transformer. This can be thought of as a very, very advanced form of autocorrect - it chooses words in sequence based on recognized patterns. This works by converting text to Tokens, running a series of pattern recognition tests to determine the probability of the next possible Token in the sequence, then returning the result.
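To make that idea concrete, here is a toy Python sketch of next-token prediction. The probability table below is entirely invented for illustration - a real transformer learns these patterns from enormous amounts of text - but the selection loop captures the "very advanced autocorrect" idea:

```python
# A toy illustration of next-token prediction. The "model" here is just a
# hand-made table of bigram probabilities; the real AI derives these patterns
# from its training data, but the generation loop works on the same principle.

# Hypothetical probabilities: given the last token, how likely is each next token?
bigram_probs = {
    "the": {"cat": 0.4, "dog": 0.35, "end": 0.25},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "sat": 0.3},
    "sat": {"down": 0.9, "end": 0.1},
}

def next_token(token):
    """Return the most probable continuation of `token`."""
    candidates = bigram_probs.get(token, {"end": 1.0})
    return max(candidates, key=candidates.get)

def generate(start, max_tokens=5):
    """Repeatedly pick the most likely next token until "end" or the limit."""
    sequence = [start]
    for _ in range(max_tokens):
        token = next_token(sequence[-1])
        if token == "end":
            break
        sequence.append(token)
    return " ".join(sequence)

print(generate("the"))  # the cat sat down
```

The real service samples from the probabilities rather than always taking the single most likely Token, which is what the Generation Options below control.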

No, you are not talking to a machine with thoughts and feelings. While NovelAI is very advanced, it is purely designed as a literature generation service. At times, you may feel like the AI understands you, and it may even seem emotional. This is it performing a very convincing facsimile of human literature - exactly what it was designed to do.

The AI is only capable of producing text that looks convincingly human, without understanding language on its own.

As such, NovelAI has no understanding of morality, as it has no understanding of anything. Please keep this in mind when experiencing the narratives presented in your Stories, and remember you can always ban certain Tokens from appearing.

⬆ Return to Page Top

-

Tokens
The AI interprets text by converting it to tokens. Tokens are how the AI sees pieces of text. Much like morphemes, Tokens are combined to form words or sentences. Because the AI has no judgment of its own, it relies on evaluating the relationships between Tokens based on its training data, then determining the most likely Token to come next in the sequence.

Think of it as a huge game of probabilities. There is no winner, but there are more likely answers, and the AI picks from those.



For example, a raincoat is composed of two Tokens - the Token for rain and the Token for coat. When put together, the AI recognizes the pattern as raincoat, and evaluates its training material to figure out what Tokens are associated with raincoat. Not all Tokens are whole words - some Tokens are as simple as punctuation marks, spaces, and even partial words like mah.

Because the AI is quite powerful, it's capable of evaluating patterns with up to 2048 Tokens in total. When you hit Send, the Current Context is fed to the AI as Tokens, the AI estimates the most likely next word in the sequence, then repeats the process until the Completion is returned.
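The 2048-Token limit can be sketched as a simple sliding window. The integer "tokens" below are stand-ins for real Tokens produced by the tokenizer:

```python
CONTEXT_LIMIT = 2048  # the maximum number of Tokens the AI can evaluate at once

def build_context(tokens, limit=CONTEXT_LIMIT):
    """Keep only the most recent `limit` tokens; older ones fall out of view."""
    return tokens[-limit:]

story = list(range(3000))   # stand-in for 3000 Tokens of Story text
context = build_context(story)
print(len(context))         # 2048
print(context[0])           # 952 -- the first 952 tokens were dropped
```

This is why very old parts of a long Story stop influencing Completions: once they fall outside the window, the AI simply never sees them.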

Because NovelAI works by identifying links between Tokens, including a Token will cause it to have a higher chance of appearing. This means that writing something like "there is no dragon here" will cause the AI to still consider the dragon as a possibility.

This is similar to ironic process theory. Instead, you should phrase in absolutes - "the cave is empty" rather than "there is no dragon in the cave" - where possible.

If you wish to experiment with Tokenization, the Author's Note and Memory fields provide an accurate Token count for everything you type within.

⬆ Return to Page Top

-

Input Field
Here is the Input Field. This section of the interface is used to control the edit history of the Story, as well as being a place to type additions to your Story.

IMAGE HERE WHEN UI IS READY

The edit history can be thought of as a timeline - every change you make is a step forward in the timeline. Undoing will take a step backward, Redoing will take a step forward, and Retry will make a new attempt at the last Completion. The Retry Tree is basically a way to jump between timelines - any time the AI sends a Completion at the same point as an existing Completion, it will create a new entry in the Retry Tree.

More information can be found in their dedicated categories below.

Undo ↩
This option will take a step backwards in the edit history.
 * It will never overwrite anything in the edit history, only stepping backwards.
 * This will remove entire sentences at a time from the point you began editing, and will remove entire Completions by the AI.

Redo ↪
This option will take a step forwards in the edit history.
 * It will never overwrite anything in the edit history, only stepping forwards, and always with the most recent entry in the Retry Tree.
 * This functions inversely to Undo - instead of removing entire sentences and Completions, it will return them.
 * This is not affected by the location of your text cursor.

Retry 🔁
This option will remove the last Completion, then make a new one.
 * While doing this, it will also create a new entry in the Retry Tree.
 * This does not affect any text you edit - if there's no Completion at this point in the edit history, it's the same as hitting Send.

Retry Tree 0
This option provides a list of all the attempted Completions at this point in the Edit History.
 * You can choose the one you felt was the most appropriate.
 * This does not count as a Completion.

Send ➡
This option will place all of the text in the input field onto the end of your Story, send all of the Current Context to the AI, then ask the AI to send a Completion.
 * This is a new step in the edit history, and does not create a new entry in the Retry Tree.

⬆ Return to Page Top

-

Story Library
Also known as the "left panel", this part of the interface controls everything outside of your Story. This includes the Settings menu, as well as the list of your Stories.

IMAGE HERE WHEN UI IS READY

The menu icon (Ξ) contains the Settings menu, and is also where you can log out. Below it is the Search Field (🔍), which allows you to search through your Story collection by typing next to the magnifying glass.

Every Story displays its Title, Description, the date it was last edited, and whether or not it is marked as a Favorite. The Title and Description can be modified by switching to the Story, then adjusting them in the Story Options. You can switch to a Story by simply clicking on it in the Story Library.

A new Story can be started by selecting the New Story button (+), found at the bottom of the Story Library.

⬆ Return to Page Top

-

Story Options
This section of the interface controls settings specific to your currently active Story. There are two tabs here - Story and Options.

IMAGE HERE WHEN UI IS READY

Story Title
The name you gave to the Story, and the one you look for when using Search.

Story Description
A short blurb to explain the contents of the Story, or a witty quip to help you identify it at a glance.

Create a new tag
Adds a new tag which you can search for.

Saved Tags
All the tags that apply to this Story. You can remove them by clicking the little cross next to them. The ordering does not matter.

Memory, Author's Note and Context have their own entries in this section.

⬆ Return to Page Top

-

Generation Options
IMAGE HERE WHEN UI IS READY

The Advanced Options have their own dedicated entry in this section.

Randomness (Temperature)
True to its name, the Randomness setting (or "Temperature") increases the likelihood of less-expected Tokens during text generation. This works by dividing the logits by the Temperature before sampling. In plain English, this means the next part of the sentence will be more unexpected, as elements that have less of a chance of appearing are granted a greater likelihood of being used.
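The logit division can be sketched in a few lines of Python. The logit values here are invented for illustration; the point is that a higher Temperature flattens the probabilities, while a lower one sharpens them:

```python
import math

def apply_temperature(logits, temperature):
    """Divide each logit by the temperature, then convert to probabilities
    with a softmax."""
    scaled = [l / temperature for l in logits]
    peak = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print([round(p, 3) for p in apply_temperature(logits, 1.0)])   # sharper distribution
print([round(p, 3) for p in apply_temperature(logits, 2.0)])   # flatter distribution
```

With the flatter distribution, unlikely Tokens are picked more often, which is exactly the "more unexpected" behavior described above.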

Max Output Length
This setting will adjust the highest number of Tokens returned at once by the AI in each Completion. It will not always hit this target, but it will never exceed it. Be aware that reducing the Output Length and enabling the Trim AI Responses option may cause undesirable effects such as very short sentences.

Min Output Length
Much like the previous setting, this one adjusts the lowest number of Tokens returned at once by the AI in each Completion. It will never be below this target, but may exceed it. Setting this option towards its maximum also makes the AI more likely to produce long sentences.

⬆ Return to Page Top

-

Context
View Last Context opens a window which displays all the tokens sent to the AI for the previous generation. This helps you check if anything you feel is important was omitted. View Current Context does the same, but for the input you're about to send.

IMAGE HERE WHEN UI IS READY

⬆ Return to Page Top

-

Prompt
The Prompt is displayed in gray by default. It is the first piece of text fed into the AI. If you have put anything into the Memory or Author's Note, they will be inserted before it in the context before being sent to the AI.

IMAGE HERE WHEN UI IS READY

⬆ Return to Page Top

-

Memory
By default, the Memory is inserted at the top of the context, before anything else. Its position may be adjusted for a stronger (closer to the bottom) or a weaker (further to the top) effect. Use it to make the AI remember context elements.

IMAGE HERE WHEN UI IS READY

⬆ Return to Page Top

-

Author's Note
The Author's Note is similar to Memory, but by default it is inserted three newlines before the last token of the input. It has a greater influence as a result, but try to keep it short to prevent leaking. Leaking is an anomaly where elements of the Author's Note or Memory, or their styling, "leak" into the generation, making it look out of place. The A/N's position may be adjusted for a stronger (closer to the bottom) or a weaker (further to the top) effect.

IMAGE HERE WHEN UI IS READY

⬆ Return to Page Top

-

Lorebook
The Lorebook allows you to create entries for specific elements in your story. This helps the AI have more information about characters, places, items, concepts and so on. This feature is not yet implemented. More to come!

IMAGE HERE WHEN UI IS READY

⬆ Return to Page Top

-

Formats
Formats are different ways of writing Memory, Author's Note and Lorebook entries. Contributor Valahraban wrote an extensive research report that goes into great detail on several formats, their utility, and how to use them.

NovelAI does not recommend, endorse, or otherwise support any format type in particular.

⬆ Return to Page Top

-

Advanced Options
These settings allow you to fine-tune the generation settings to your liking. They get really technical, so only explore them if you like messing with the finer details. Otherwise, leave them at their defaults; they're usually good as is.



Top-K Sampling
This setting affects the pool of tokens the AI will pick from by cutting off the least likely Tokens, then redistributing the probability for those that remain. The pool will only contain the K most likely tokens. If the setting is set to 10, then your pool will contain the 10 most likely tokens.

A high value could result in a wide pool, such as "up, down, left, right, across, around, fancy" and so on.

A low value would result in fewer potential matches, such as "hot, bright, with".
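The cutoff-and-renormalize step can be sketched in Python. The probabilities below are invented for illustration:

```python
def top_k(probs, k):
    """Keep the k most likely tokens and renormalize their probabilities
    so they sum to 1 again."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in ranked)
    return {token: p / total for token, p in ranked}

probs = {"up": 0.3, "down": 0.25, "left": 0.2, "right": 0.15, "fancy": 0.1}
print(top_k(probs, 2))   # only "up" and "down" remain, renormalized
```

Because the surviving probabilities are renormalized, the most likely Tokens end up even more likely than before the cutoff.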

Nucleus Sampling
Relating to the previous setting, this adds up the probability of each potential Token in descending order of likelihood until the total reaches the value specified; only those Tokens remain in the pool. Lowering this value therefore creates a smaller subset of probable Tokens.

In plain English, lowering this setting causes more consistent Completions at the cost of creativity.
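A minimal Python sketch of the cumulative-probability cutoff (probabilities again invented for illustration):

```python
def nucleus(probs, p):
    """Keep the smallest set of tokens whose cumulative probability reaches p,
    then renormalize that set."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, running = [], 0.0
    for token, prob in ranked:
        kept.append((token, prob))
        running += prob
        if running >= p:
            break
    total = sum(prob for _, prob in kept)
    return {token: prob / total for token, prob in kept}

probs = {"hot": 0.5, "bright": 0.3, "with": 0.15, "fancy": 0.05}
print(nucleus(probs, 0.8))   # keeps "hot" and "bright" only
```

Unlike Top-K, the size of the pool adapts: when the AI is confident, few Tokens survive; when many Tokens are plausible, more of them stay in play.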

Tail-Free Sampling
A tail in this context is the subset of Tokens least likely to be chosen in a Completion. This alternative sampling method works by estimating the "tail" probability, removing that tail to the best of its ability, then re-normalizing the remaining sample.

This method may have a smaller impact on creativity while maintaining consistency. However, take note that it tends to behave strangely if your context does not contain a lot of data.
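One way the tail can be estimated is from the curvature (second derivative) of the sorted probability curve - the tail starts where the curve flattens out. The sketch below follows that idea with invented probabilities; the exact method NovelAI uses may differ in its details:

```python
def tail_free(probs, z):
    """Rough sketch of tail-free sampling: trim the low-probability "tail"
    using the curvature of the sorted probability distribution."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    values = [p for _, p in ranked]
    # Finite-difference first and second derivatives of the sorted curve.
    first = [values[i + 1] - values[i] for i in range(len(values) - 1)]
    second = [abs(first[i + 1] - first[i]) for i in range(len(first) - 1)]
    total = sum(second)
    if total == 0:                       # flat curve: nothing to trim
        return dict(ranked)
    weights = [s / total for s in second]
    kept, running = [], 0.0
    for i, (token, prob) in enumerate(ranked):
        kept.append((token, prob))
        if i < len(weights):
            running += weights[i]
        if running > z:                  # past the estimated tail: stop keeping
            break
    norm = sum(prob for _, prob in kept)
    return {token: prob / norm for token, prob in kept}

probs = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}
print(tail_free(probs, 0.6))   # keeps "a" and "b", drops the tail
```

Note the flat-curve branch: when every Token is equally likely, there is no meaningful tail to trim, which matches the observation above that this method behaves oddly when the context carries little information.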

Repetition Penalty
Because text generation is based on patterns, repetition is a constant concern. The Repetition Penalty introduces an artificial dampener to the probability of a token depending on the frequency of its appearance in the Current Context.

As such, increasing this value makes a word less likely to appear for each time it shows up in the text. Do take note that this can get really awkward with words that are recurrent in the current context, such as names, or objects being discussed. With high Repetition Penalty, the AI may find itself unable to use a word repeatedly, and will need to substitute it with another which may be inappropriate.
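One common scheme for this (used by several GPT-style samplers; whether NovelAI applies it exactly this way, or compounds it per occurrence, is an assumption here) divides positive logits by the penalty and multiplies negative ones, so the adjustment always pushes a seen token toward being less likely:

```python
def repetition_penalty(logits, context_tokens, penalty):
    """Dampen the logit of every token that appears in the context. In this
    sketch the penalty compounds with each occurrence, so frequent tokens
    are dampened more."""
    adjusted = dict(logits)
    for token in context_tokens:
        if token in adjusted:
            if adjusted[token] > 0:
                adjusted[token] /= penalty
            else:
                adjusted[token] *= penalty
    return adjusted

logits = {"dragon": 3.0, "castle": 2.0, "knight": 1.0}
# "dragon" appeared twice in the context, so it is penalized twice.
print(repetition_penalty(logits, ["dragon", "dragon"], 1.2))
```

This makes the trade-off visible: a recurring name like "dragon" loses probability every time it appears, which is why high values can force the AI into awkward substitutions.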

Repetition Penalty Range
Defines the number of tokens that will be checked for repetitions, starting from the last token generated. The larger the range, the more tokens are checked.

Repetition Penalty Slope
The penalty to repeated tokens is applied differently based on distance from the final token. The distribution of that penalty follows an S-shaped curve. If the slope is set to 0, that curve will be completely flat, and all tokens will be penalized equally. If it is set to a very high value, it will act more like two steps: early tokens will receive little to no penalty, but later ones will be considerably penalized.
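A hypothetical sketch of such an S-shaped weighting, using a logistic curve (the exact curve NovelAI uses is not documented here, so treat this purely as an illustration of the flat-versus-two-step behavior):

```python
import math

def penalty_ramp(num_tokens, slope):
    """Weight applied to the repetition penalty at each position in the
    penalty range: index 0 is the oldest token in range, the last index is
    the most recent. Slope 0 gives a flat line; a large slope approaches
    a two-step function."""
    weights = []
    for i in range(num_tokens):
        x = i / max(num_tokens - 1, 1) - 0.5            # spread over -0.5 .. 0.5
        weights.append(1 / (1 + math.exp(-slope * x)))  # S-shaped logistic curve
    return weights

print([round(w, 2) for w in penalty_ramp(5, 0)])    # flat: every weight is 0.5
print([round(w, 2) for w in penalty_ramp(5, 20)])   # step-like: near 0, then near 1
```

Plotting the two outputs makes the description above concrete: slope 0 penalizes the whole range evenly, while a steep slope concentrates the penalty on recent tokens.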

End of Sampling Token ID
If you wish to end your Completions upon reaching a specific Token, simply add it to this field. Doing so will cause the Completion to end prematurely upon generating the Token.

This can be used to trim the output to single sentences by inputting punctuation, or to increase the accuracy of Lorebook entries by pausing Completion when a defined word is reached.
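The early-stop behavior is straightforward to sketch. The token IDs below are made up (13 merely stands in for the ID of a period):

```python
def generate_with_stop(token_stream, stop_token_id, max_length=50):
    """Collect tokens from a generator, halting early if the stop token
    appears or the maximum output length is reached."""
    output = []
    for token_id in token_stream:
        if token_id == stop_token_id:
            break
        output.append(token_id)
        if len(output) >= max_length:
            break
    return output

# Pretend the model emitted these token IDs; 13 stands in for a period.
print(generate_with_stop(iter([17, 42, 13, 99, 7]), stop_token_id=13))  # [17, 42]
```

Everything after the stop token is discarded, which is what trims the Completion to a single sentence when punctuation is used as the stop.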

Ban Token
Any Tokens added here will have their likelihoods reduced to zero. This means they will not appear in Completions. As this adjusts the relationships between Tokens, this will have an impact on the phrasing chosen by the AI. Be careful about what you ban, because this can heavily disrupt output if used incorrectly.
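Reducing a likelihood to zero is typically done by sending the token's logit to negative infinity before the softmax step, which yields a probability of exactly zero. A sketch with invented logits:

```python
def ban_tokens(logits, banned):
    """Ban tokens by sending their logits to negative infinity, which makes
    their probability exactly zero after the softmax step."""
    return {t: (float("-inf") if t in banned else l) for t, l in logits.items()}

logits = {"sword": 1.2, "shield": 0.8, "axe": 0.5}
print(ban_tokens(logits, {"axe"}))   # "axe" can no longer be sampled
```

Because the remaining probabilities are renormalized after the ban, the probability mass that "axe" held is redistributed to the surviving tokens - this is the shift in phrasing the paragraph above warns about.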

Banned Tokens
Relating to the previous setting, this field shows every Token currently blacklisted for generation. Clicking one of these tags will remove it from the list.

Ban Bracket Generation
At times, you may wish to include hints to the AI that are not considered for text generation. These can be encapsulated in square brackets ([ and ]) to relay information that will affect the Current Context while not being considered part of the actual text.

These most often take the form of hints. For more information on what to put between square brackets, see Keeping Track.

Trim AI Responses
A sentence delimiter (such as a period, question mark, or exclamation point) marks where one sentence ends. Trimming AI responses to the last delimiter prevents words from appearing after the final delimiter in the generation.

Basically, this setting trims dangling words from the output. An output like this:

"She opened the door. The hallway was"

Becomes this:

"She opened the door."

Keep in mind that this can cause very short outputs depending on the current settings.
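A minimal sketch of the trimming step, assuming the delimiters are ., ! and ?, optionally followed by a closing quote (the real feature's exact delimiter set may differ):

```python
import re

def trim_to_delimiter(text):
    """Cut the text after the last sentence delimiter (., ! or ?),
    keeping a closing quote if one immediately follows it."""
    matches = list(re.finditer(r'[.!?]["\']?', text))
    if not matches:
        return text          # no delimiter at all: leave the text untouched
    return text[:matches[-1].end()]

print(trim_to_delimiter('She opened the door. The hallway was'))
# She opened the door.
```

Note the no-delimiter branch: if the Completion never produced a delimiter at all, trimming would otherwise delete the entire output, which is one source of the very short outputs mentioned above.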

⬆ Return to Page Top