NovelAI Features

⬅ Back to Lander

-

= Detailed Concepts =

This section seeks to explain, in-depth, the various elements that make up NovelAI.

-

Input Field
Here is the Input Field. This section of the interface is used to control the edit history of the Story, as well as being a place to type additions to your Story.



The topmost field is the Story Title. You can click the 🎲 button to generate a story title based on the current context.

The edit history can be thought of as a timeline - every change you make is a step forward in the timeline. Undoing takes a step backward, Redoing takes a step forward, and Retry makes a new attempt at the last Generation. The Retry Tree is essentially a way to jump between timelines - any time the AI generates a response at a point where a Generation already exists, a new entry is created in the Retry Tree.

You can edit the story's title by clicking on it.

More information can be found in their dedicated categories below.

Undo ↩
This option will take a step backwards in the edit history.
 * It will never overwrite anything in the edit history, only stepping backwards.
 * This will remove entire sentences at a time from the point you began editing, and will remove entire Generations by the AI.

Redo ↪
This option will take a step forwards in the edit history.
 * It will never overwrite anything in the edit history, only stepping forwards, and always with the most recent entry in the Retry Tree.
 * This functions inversely to Undo - instead of removing entire sentences and Generations, it will return them.
 * This is not affected by the location of your text cursor.

Retry Tree


This option provides a list of all the attempted Generations at this point in the Edit History.
 * You can choose the one you felt was the most appropriate.
 * This does not count as a Generation.

📕 Lorebook
Opens the Lorebook window.

Retry 🔁
This option will remove the last Generation, then make a new one.
 * While doing this, it will also create a new entry in the Retry Tree.
 * This does not affect any text you edit - if there's no Generation at this point in the edit history, it's the same as hitting Send.

Send ➡
This option will place all of the text in the input field onto the end of your Story, send all of the Current Context to the AI, then ask the AI to send a Generation.

This is a new step in the edit history, and does not create a new entry in the Retry Tree.

⬆ Return to Page Top

-

= Story Options =



This section of the interface controls settings specific to your currently active Story. There are two tabs here - Story and Options.

AI Model
Allows you to choose between the following models:


 * Euterpe, a fine-tuned Fairseq GPT-13B model.
 * Krake, a fine-tuned GPT-NeoX 20B model. (Only available on Opus.)
 * Sigurd, a fine-tuned GPT-J 6B model.
 * Calliope, the original fine-tuned GPT-Neo 2.7B model used in the Alpha.
 * Snek, a fine-tuned GPT-J 6B model trained specifically for Python output.
 * Genji, a fine-tuned GPT-J 6B model trained for Japanese output based on Japanese Light Novels.

AI Module
AI Modules are data modules that are inserted into the AI's memory in order to influence the text it will generate. These modules reduce the total context space by twenty tokens when in use, but are not tokens in themselves.

Each module is similar to a "mini-fine-tune", a corpus of text that was used to adjust the AI based on how it is written. Different modules have different effects, which depend on your own writing and the ideas, characters and scenarios you write about.

There are four types of modules: Style, Theme, Inspiration and Special.


 * Styles are based on multiple works from the same author.
 * Themes are based on multiple works, from multiple authors, but from the same genre.
 * Inspirations are based on a singular, specific work, from a single author.
 * Special are for specific occasions or have different purposes in mind than normal storywriting.

Experiment to find what works best with what you enjoy and want to write about!

Further information on cleaning and preparing a dataset for use in training AI Modules can be found at the Datasetting for AI Modules section.

⬆ Return to Page Top

-

Story's Memory
Injects this text at the top of the context. Helpful to keep the AI on track with important information to keep in mind, as well as the Author, Title, Tags, Genre setup. More information on how to use this can be found in the Injected Text section.

Story's Author's Note
Injects this text three newlines from the bottom of the context. Helpful for immediate instructions for the AI, which gives direction to the AI's generations. More information on how to use this can be found in the Injected Text section.

Story Stats
The Story Stats menu allows you to consult several metrics such as the number of text characters, your use of retries, the size of your retry branches, etc. You may also prune your story in several ways in order to decrease its filesize.


 * Trim Story: Deletes all redo steps, leaving only the current path. You will not be able to navigate the Retry Tree anymore, but the filesize of your story will be reduced.


 * Flatten Story: More aggressive version of Trimming. Deletes all Undo steps and branches, leaving you with a single block of text.


 * Reset to Prompt: Reverts your story back to the initial prompt it was started with, deleting everything else.

Exporting the Story
You may export the story as:
 * A Story file, which includes all branches and retries, and can be large!


 * A Scenario file, which is flattened and ready to be imported.


 * A Plaintext file so it can simply be read on most devices.


 * Copy the entire .story file to the clipboard, which can cause lag, as this is a lot of data!

⬆ Return to Page Top

-

Context
View Last Context opens a window which displays all the tokens sent to the AI for the previous generation. This helps you check if anything you feel is important was omitted. View Current Context does the same, but for the input you're about to send.



⬆ Return to Page Top

-

Prompt
The prompt is displayed in cream by default. It is the first piece of text fed into the AI. If you have put anything into the Memory or Author's note, they will be inserted before it in the context before being sent to the AI.



⬆ Return to Page Top

Injected Text
Injected Text is any text that is not part of the story, but part of the context. All of these elements are injected text:


 * The Memory.


 * The Author's Note, or A/N colloquially.


 * Lorebook entries.


 * Ephemeral Context entries.

Fundamentally, all Injected Text works the same way: it's read by the AI and influences its generation.

Square brackets are recommended mostly for Author, Title, Tags, Genre metadata in Memory. The brackets must be separated from their contents by a space, and everything should be lowercase outside of metadata categories and proper nouns.

There are two important things to consider about injected text:


 * Position determines the strength of the injection's influence. Closer to the bottom = stronger; closer to the top = weaker.


 * Style determines how it influences the generation. Generally, you want to stay close to your Story's style, perhaps with minor concessions such as removing determiners, prepositions, etc.

⬆ Return to Page Top

-

Memory
By default, the Memory is inserted at the top of the context, before anything else. Its position may be adjusted for a stronger (closer to the bottom) or a weaker (further to the top) effect. Traditionally, it is used to make the AI remember broad context elements and the Author, Title, Tags, Genre metadata.



⬆ Return to Page Top

-

Author's Note
The Author's Note or A/N is identical in format and use to Memory, but it is inserted, by default, three newlines before the last token of the input. It has a greater influence as a result. The A/N's position may be adjusted for a stronger (closer to the bottom) or a weaker (further to the top) effect. Traditionally, it is used to give immediate instructions and immediately important information, such as the name of the POV character, the date, etc.



⬆ Return to Page Top

-

Lorebook


The Lorebook allows you to create entries for specific elements in your story. This helps the AI have more information about characters, places, items, concepts and so on.

Click the 📕 button, or Open Lorebook in the story tab on the right, to open the main lorebook window.

You can import a Lorebook file by clicking the 📤 Import button at the top left. The 🖼️ icon lets you import lorebooks embedded in images.

Click the funnel icon to sort entries alphabetically or by timestamp.

Click the 🔍 icon to perform a search.

Basics
Click ➕ Add Entry to create a new entry. Only the content and settings are read by the AI. The entry name is just an identifier.

You will be presented with this window:



Entries
A lore entry is composed of two main elements: Entry Text and Keys.

The Entry Text is what will be injected into the context window when a Key is triggered. Balancing out the density of this content with the remaining space in your context window is important.

The Default Settings will be applied to all new entries, but they can be changed after creation.

You can mass-select entries with the checkbox ✅ to delete🗑️, export📥, or export to image 🖼️ several entries at once.

Keys
Enabled determines if the entry will be inserted in the context if detected. If it is disabled, it won't trigger regardless of keys. This is useful to reduce context cluttering if you don't need details about specific things.

Keys are all the words that the AI will associate with this entry. If the AI reads one of these words, the connected entry will be inserted in the context.

Type the key and press Enter to register it. Keys are case-insensitive by default.

To make a Key case-sensitive, preface it with  and close it by  :.

If it is part of a placeholder, add a $ at the very beginning before the first dash.
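As a rough sketch of the matching logic described above (in Python, with illustrative names - not NovelAI's actual internals), key detection amounts to a case-insensitive substring search over the last portion of the context:

```python
def entry_triggered(context: str, keys: list[str], search_range: int = 1000) -> bool:
    """Return True if any key appears in the last `search_range` characters.

    Keys are matched case-insensitively, mirroring the default behaviour
    described above. Function and parameter names are illustrative.
    """
    window = context[-search_range:].lower()
    return any(key.lower() in window for key in keys)
```

A key like "Vermithrax" would then trigger on "vermithrax", "VERMITHRAX", and so on, as long as it appears within the search range.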

💡 Lore Generation
Click the 💡 Generate button to enter the Lore Generator.

Categories
Click ➕ Add Category to create a new category.

Categories allow you to organize Lorebook Entries, but also create a Subcontext. You can define default settings for all entries in the category as well.

A subcontext is basically a bubble in the main, complete context. All entries of the category will be inserted, then organized according to their settings, in this bubble only. This helps reduce "spreading out data" by packing similar information together.

The entries' individual placement settings will be applied relative to other entries in the bubble only. I.e., if an entry has a placement position of -2 newlines, it will be inserted 2 newlines before the last line of the last inserted entry in the subcontext bubble.

The entire subcontext is then inserted as one block according to its own placement settings.

This is useful if you want to lump a certain type of entries together, or use something like Scaffolding where every type of content has its own settings.

Placement Tab
Accessed by clicking the Placement tab. If this tab is active, you can click Dock Active Tab to the Side to make it show up at all times. Click Undock to remove it.




 * Search Range: Determines how many characters of text will be read by the AI when it looks for lorebook keys.


 * Force Activation: If turned ON, the entry will ALWAYS be in the context (if it can fit in there).


 * Key-Relative Insertion: By default, Lorebook Entries are inserted relative to the top, or the bottom of the text, see Insertion Position. When this toggle is ON, entries are inserted relative to the last occurrence of the Key found in the context.


 * Cascading Activation: When ON, this entry will also look for its keys in other Lorebook entries, the Memory, and the Author's note. Search Range will be disregarded if this toggle is ON.

 * Prefix & Suffix: These two are intended to work in tandem to allow for lengthier entries without losing coherence when the entry is trimmed. For example, you could add the prefix  and the suffix   to encapsulate the entirety of your entry despite trimming. If your entry read as , and the last sentence was trimmed, it would still read as   despite the end of the entry being trimmed - your prefix and suffix still remain.

You may also use a  (a newline marker), which helps isolate the entry further by separating it with a full newline.


 * Token Budget: Keeps this number of tokens in the context window for this entry. This will overwrite other content if necessary! It's recommended to set it a little lower than the entry's full size.


 * Insertion Order: The higher this number, the earlier the entry is processed. Entries with a low value may be dropped to save space for those with a higher value. If you have three entries with order 500, 0, and -500, they will be processed from highest (500) to lowest (-500).


 * Insertion Position: How far from the top (if positive) or the bottom (if negative) the entry will be inserted in the window. The unit is defined in Insertion Type: it can be a number of tokens, sentences, or newlines.

As an example, if you set it to -3 Newline, then it will insert the entry's text as soon as it finds the third newline, reading back from the bottom of the window. -1 will mean it is always placed at the very bottom of the Context, just as positive 1 will always place it at the very top of the Context. 0 will always be the very top.


 * Trim Direction: If the entry needs to be inserted partially due to lack of room in the context window, should it trim from the beginning towards the end (Top), from the end towards the beginning (Bottom), or omit the entire entry if it can't fit fully (Do Not Trim)?

Attention: If an entry is inserted after a subcontext, then it may insert itself into that subcontext's text area.
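The newline-offset placement described above can be sketched as follows (a simplified illustration, not NovelAI's exact algorithm - in particular, the clamping behaviour at the edges is an assumption):

```python
def insert_at_newline_offset(context: str, entry: str, position: int) -> str:
    """Insert `entry` at a newline offset within `context`.

    Negative positions count back from the bottom (-1 = very bottom),
    positive or zero positions count from the top (0 = very top).
    Simplified sketch; edge behaviour is an assumption.
    """
    lines = context.split("\n")
    if position < 0:
        idx = max(len(lines) + position + 1, 0)  # -1 maps to the very bottom
    else:
        idx = min(position, len(lines))          # 0 maps to the very top
    return "\n".join(lines[:idx] + [entry] + lines[idx:])
```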

Lorebook Phrase Bias


If this tab is active, you can click Dock Active Tab to the Side to make it show up at all times. Click Undock to remove it.

Explained in Phrase Bias, although this has an extra setting: When Entry Inactive, which enables the biases if the entry is not active! This can help activate things randomly, such as monster encounters.

Phrase biases in entries can otherwise be used to favor nicknames for a person (such as referring to their job, or a shorthand), and terms relative to them.

⬆ Return to Page Top

Why use Brackets?
Bracketed text is used specifically in the fine-tuning material for metadata, which includes Author, Title, Tags and Genre.

Brackets used for metadata look like this:

As you can see, you can use empty categories, or even omit them outright. Note the spaces next to the brackets! For optimal effect on your story, it is recommended to include the metadata headers in their original orders, even if they are empty. You can omit the first ones if they are empty, but it is better if you keep Genre: at the very least.

Each category is separated by a semicolon and elements in a category are separated by a comma.

It is also usable for:
 * Dates and locations:
 * The name of the POV character
 * To contain text that has a tendency to leak into generations.

Bracketed text is thus best described as being read by the AI as "Pertinent information but not part of the text." This helps it keep things into memory without trying to continue from them as if they were sentences in the text.

Punctuation other than colons usually appears only as part of a chapter/work title.

You can encase only descriptive passages in Injected Text entries if they differ from the usual style of your prose.

Brackets do not notably affect the accuracy of the text - this is Generation Settings at work.

It is generally not recommended to use brackets in Euterpe or Krake for anything outside of the aforementioned purposes.

As a note, if you are using Krake, enable the Preamble in your AI settings to reinforce the Metadata's effect.

⬆ Return to Page Top

Context Viewer
The Context Viewer is a powerful tool to identify what elements were used by the AI in the last generation. This helps you diagnose Memory, Author's Note and Lorebook usage. Check for bloat, trimmed entries, or ones that take too much space using this tool.



Identifiers
Lists the Identifier of each element of the context, which describes its origin as one of the following:
 * Story: From the main text.
 * Memory: From the Memory block.
 * Author's Note: From the Author's Note block.
 * Display Name of a Lorebook Entry: The name that you gave that entry in the Lorebook, not its keys.

Inclusion
Lists if this element is included in the context:
 * Included: Successfully inserted in the context.
 * Partially Included: Inserted in the context but some trimming was performed.
 * Not Included: Insertion in the context failed or was not attempted.

Reason
Lists the reason for this element's inclusion or omission:

Included
 * Default: Reserved to Story, Memory and Author's Note. Included by default.
 * Key Activated: This Lorebook entry was triggered by one of its keys.
 * Forced: This Lorebook entry was activated because it was set to Forced.

Omitted
 * Disabled: This Lorebook entry was omitted because it was disabled.
 * No key: This Lorebook entry was omitted because it could not find any of its keys in the text.
 * No space: This entry was omitted because it could not be allocated enough tokens to fit.
 * No text: This entry was deactivated because it contains no text.

Key
Lists the key that triggered this Lorebook entry.

Reserved
Lists the number of tokens reserved for this entry. This is usually lower than the Reserved Tokens setting of that entry, as that setting is the upper limit.

Tokens
Lists how many tokens this entry uses solely on its own. Tokenization can cause a few extra (or sometimes fewer) tokens to be used when this entry is placed in the text.

Trim Type
Lists how this entry was trimmed. There are four trim steps, which occur in this sequence:
 * Fit the entire entry without trimming. (No Trim) If it doesn't fit, go to the next step:
 * The entry was trimmed to a new line character inside its text. (New Line) If this results in the entry having less than 30% of its allocated token content inserted, go to the next step:
 * The entry was trimmed to a sentence delimiter (period, ellipsis, semicolon). (Sentence) If this causes the entry to have less than 30% of its allocated token content inserted, go to the next step:
 * The entry is trimmed by the individual token, and then all the content that can fit in the space that remains is inserted. (Token) If this STILL fails, this is likely because the Prefix and Suffix can't fit in the context, so the entire entry is omitted.
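The four-step cascade can be sketched like this, using character counts as a crude stand-in for a real token budget (illustrative only - NovelAI uses an actual tokenizer, and Trim Direction also affects which end is cut):

```python
import re

def trim_entry(text: str, max_len: int):
    """Apply the trim cascade: No Trim -> New Line -> Sentence -> Token.

    `max_len` is a character budget standing in for a token budget.
    Returns the trimmed text and the trim type used.
    """
    if len(text) <= max_len:
        return text, "No Trim"

    # Step 2: keep whole lines (trimming from the bottom here, as a sketch)
    kept = ""
    for line in text.split("\n"):
        candidate = kept + ("\n" if kept else "") + line
        if len(candidate) > max_len:
            break
        kept = candidate
    if len(kept) >= 0.3 * max_len:
        return kept, "New Line"

    # Step 3: keep whole sentences (period, ellipsis, semicolon)
    kept = ""
    for sentence in re.split(r"(?<=[.;])\s+", text):
        candidate = (kept + " " + sentence).strip()
        if len(candidate) > max_len:
            break
        kept = candidate
    if len(kept) >= 0.3 * max_len:
        return kept, "Sentence"

    # Step 4: hard cut at the budget
    return text[:max_len], "Token"
```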

Advanced Context Settings


Remember all the advanced settings of the Lorebook? Those are used here, but for the Story, Memory and Author's Note.

These can be accessed in the Advanced Options collapse in the Options tab on the right.

This allows you to:


 * Fine-tune the maximum size of these blocks.


 * Make Memory or the Author's Note get trimmed before the Story by setting them to a lower priority.


 * Change the way these three blocks are trimmed.


 * Force suffixes and prefixes that you won't need to write in the blocks directly.

⬆ Return to Page Top

-

Ephemeral Context


Ephemeral Context entries are effectively time-sensitive context injections. Think Mission Impossible:

Every time you generate text, you perform a step. Ephemeral Context entries wait a certain number of steps, appear, remain for a certain number of steps, and disappear.

The syntax example is as follows:

Several symbols are used to define the type of information specified:


 * {} Contains the block.


 * The first number specifies the exact starting step, if necessary. You can also specify negative steps using -


 * + specifies the delay in steps before activation. +0 will trigger immediately. Adding r to it will make it repeat after the number of steps set passes, even if the entry is still active. As a result, make sure the delay is longer than the duration if you don't want the entry to be always on, if it repeats.


 * ~ specifies the duration of the entry, in steps, before it disables.


 * , followed by + or - specifies the insertion position of the entry, in new lines. + starts from the top of the context, - starts from the bottom of the context.


 * : specifies the beginning of the text content of the entry.

Thus:  will add "[Angela's amnesia temporarily dissipates.]" to the context, five new lines from the top of the context, for fifteen steps, starting thirty steps after you set up this entry. Effectively, it'll be on half the time.

You may also add a ! after the first curly brace to be able to specify a temporarily inactive entry. This makes it always present except during the Ephemeral Context's entry duration.

This one will be off half the time, when the other entry is active.

You can also type out Ephemeral Context entries directly in the Input box.
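The timing behaviour described above can be modelled roughly like this (an interpretation of the delay/duration/repeat rules; function names and edge cases are assumptions):

```python
def is_active(step: int, delay: int, duration: int, repeat: bool = False) -> bool:
    """Is an ephemeral entry active at `step` (steps since it was created)?

    The entry waits `delay` steps, stays on for `duration` steps, and -
    if `repeat` is set - re-triggers every `delay` steps thereafter.
    Illustrative model of the rules described above.
    """
    if step < delay:
        return False
    if repeat and delay > 0:
        return (step - delay) % delay < duration
    return step - delay < duration
```

With a delay of 30 and a duration of 15, a repeating entry is on for half of every 30-step cycle, matching the "on half the time" example above.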

⬆ Return to Page Top

-

= Options Tab =

Generation Presets


In order to make selecting the AI's various generation settings easier, NovelAI offers several generation presets.

Settings are divided into three categories:


 * User: Settings you have defined and saved, or imported.


 * Scenario: Settings that came included in the scenario you imported.


 * Defaults: Settings designed by NAI community researchers.

You can Import a .preset file, or export the currently selected custom preset in the same format.

Use the ➕ button to create a new preset based on the current generation settings.

Use the ✍ button to edit the preset's name.

⬆ Return to Page Top

Generation Options
These settings allow you to adjust the generation settings to your liking. These get really technical so only explore them if you like messing with the finer things. Otherwise, leave them to their defaults; they're usually good as is.

Most of them deal with the Pool of possible tokens. To understand what this means, look at these examples:

could result in "up, down, left, right, across, around, fancy" and so on.

would result in fewer potential matches, such as "hot, bright, with".



Randomness (Temperature)
Imagine the next token for a generation comes out of a bag. Randomness shakes the bag until one comes out. The most likely token would normally come out first, but shaking the bag gives other ones a chance.

True to its name, the Randomness setting (or "Temperature") increases the likelihood of less-expected tokens during text generation. This works by dividing logits by the Temperature before sampling. In plain English, this means the next part of the sentence will be more unexpected, as elements that have less of a chance of appearing are granted a greater likelihood of being used.
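As a minimal sketch of this in Python (standard softmax-with-temperature, which matches the "dividing logits by the Temperature" description):

```python
import math

def apply_temperature(logits, temperature):
    """Divide logits by the temperature, then softmax into probabilities.

    Higher temperatures flatten the distribution, giving less-expected
    tokens a bigger chance; lower temperatures sharpen it.
    """
    scaled = [l / temperature for l in logits]
    peak = max(scaled)
    exps = [math.exp(l - peak) for l in scaled]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]
```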

Output Length
This setting adjusts the approximate number of characters returned at once by the AI in each Generation. It shows the number of tokens that will be returned, multiplied by four to approximate the average number of characters. The token count will not always be exact, but will never fall under this number: up to 20 additional tokens may be generated to attempt to reach the end of a sentence before the generation ends.

Repetition Penalty
Going back to the bag metaphor, Repetition Penalty checks for tokens that appear too often and throws them out.

Because text generation is based on patterns, repetition is a constant concern. The Repetition Penalty introduces an artificial dampener to the probability of a token depending on the frequency of its appearance in the Current Context.

As such, increasing this value makes a word less likely to appear for each time it shows up in the text. Do take note that this can get really awkward with words that are recurrent in the current context, such as names, or objects being discussed. With high Repetition Penalty, the AI may find itself unable to use a word repeatedly, and will need to substitute it with another which may be inappropriate.
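A sketch of the basic mechanism (this follows the widely used CTRL-style penalty; NovelAI's exact formula may differ):

```python
def apply_repetition_penalty(logits, seen_token_ids, penalty):
    """Dampen the logits of tokens that already appear in the context.

    Positive logits are divided by the penalty and negative ones are
    multiplied by it, so every seen token becomes less likely.
    """
    out = list(logits)
    for t in set(seen_token_ids):
        out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out
```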

⬆ Return to Page Top

-

Change Settings Order
Allows you to enable or disable sampling types, as well as select the order in which they are processed.

Sampling
Imagine that sampling is a shuffle bag full of tokens. The settings adjust the size of the bag, and the shuffling done.

Top-K Sampling
This setting affects the pool of tokens the AI will pick from by only selecting the most likely tokens, then redistributing the probability for those that remain. The pool will only contain the K most likely tokens. If the setting is set to 10, then your pool will contain the 10 most likely tokens. (Top-10 Sampling).

In plain English, lowering this setting causes more consistent Generations at the cost of creativity.
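A sketch of the filter, treating token IDs as list indices (illustrative):

```python
def top_k_filter(probs, k):
    """Keep only the k most likely tokens, zero out the rest,
    and renormalize the remaining probabilities."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(ranked[:k])
    kept = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    total = sum(kept)
    return [p / total for p in kept]
```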

Nucleus Sampling
Relating to the previous setting, this adds up the probability of each potential Token in descending order of likelihood until it reaches the value specified. This value is an inverse percentage of likelihood for the next Token - therefore, lowering this value creates a smaller subset of probable Tokens.

In plain English, lowering this setting causes more consistent Generations at the cost of creativity.

As an example, if the most likely token has a 30% chance, the second 25%, the third 20%, the fourth 10%, the fifth 5%, and the sixth 3%, and your setting is at 0.9 (90%), then you would do: 30+25+20+10+5 = 90. The sixth most likely token and onwards will be removed from the pool.
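The worked example above translates directly into code (a sketch; the probabilities mirror the example and are not renormalized beforehand):

```python
def nucleus_filter(probs, top_p):
    """Keep the smallest set of most likely tokens whose cumulative
    probability reaches top_p, then renormalize."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cumulative = set(), 0.0
    for i in ranked:
        keep.add(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    kept = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    total = sum(kept)
    return [p / total for p in kept]
```

With the probabilities from the example (30%, 25%, 20%, 10%, 5%, 3%) and a setting just under 0.9, the five most likely tokens survive and the sixth is dropped.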

Tail-Free Sampling
A tail in this context is the least-likely subset of Tokens to be chosen in a Generation. This alternative sampling method works by trimming the least-likely tokens by searching for the estimated tail's probability, removing that tail to the best of its ability, then re-normalizing the remaining sample.

This method may have a smaller impact on creativity while maintaining consistency. However, take note that it tends to behave strangely if your context does not contain a lot of data.

Consider the setting as "how much you want to keep". High settings lead to larger token pools.
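A rough sketch of the idea: estimate where the tail begins from the curvature (second differences) of the sorted probabilities, then drop everything past that point. This is a simplification of the original algorithm, not NovelAI's exact implementation:

```python
def tail_free_filter(probs, z):
    """Drop the estimated 'tail' of the distribution.

    The tail boundary is guessed from the normalized absolute second
    differences of the sorted probabilities; higher z keeps more tokens.
    Simplified sketch of Tail-Free Sampling.
    """
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    p = [probs[i] for i in order]
    d2 = [abs(p[i] - 2 * p[i + 1] + p[i + 2]) for i in range(len(p) - 2)]
    total = sum(d2) or 1.0
    weights = [d / total for d in d2]
    cumulative, cut = 0.0, len(p)
    for i, w in enumerate(weights):
        cumulative += w
        if cumulative > z:
            cut = i + 1  # everything past this rank is treated as tail
            break
    keep = set(order[:cut])
    kept = [q if i in keep else 0.0 for i, q in enumerate(probs)]
    s = sum(kept)
    return [q / s for q in kept]
```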

Top-A Sampling
Top-A considers the probability of the most likely Token, and sets a limit based on its percentage. After this, remaining tokens are compared to this limit. If their probability is too low, they are removed from the pool.

The calculation is as follows:

Increasing A results in a stricter limit. Lowering A results in a looser limit.

This means that if the top token has a moderate likelihood of appearing, the pool of possibilities will be large. On the other hand, if the top token has a very high likelihood of appearing, then the pool will be 1-3 tokens at most. This ensures that structure remains solid, and focuses creative output in areas where it is actually wanted.
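The calculation elided above is commonly given as limit = A × (probability of the top token)²; assuming that formula, a sketch looks like:

```python
def top_a_filter(probs, a):
    """Drop tokens whose probability falls below a limit derived from the
    top token. Assumes the commonly cited rule limit = a * max(probs)**2
    (the exact formula was not given in this document)."""
    limit = a * max(probs) ** 2
    kept = [p if p >= limit else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]
```

A sharply peaked distribution then keeps only a handful of tokens, while a flat one keeps nearly all of them.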

Typical Sampling
Typical Sampling is complicated to explain, as it uses an advanced concept known as conditional entropy. It calculates an entropy average, shifts the probabilities of tokens, and then checks which values shifted the most. Those are removed from the pool.

Typical is atypical compared to other sampling methods, as it cuts both likely and unlikely tokens, based on their deviation from the expected base line of entropy. Extremes are considered by the math behind the sampling to be too "random" or "noisy", and thus carrying less "information".

Lowering the value makes the thresholds for cutting off tokens harsher. Increasing it loosens the thresholds, allowing for more tokens.
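As a sketch of the mechanism (based on the published Typical Sampling idea of comparing each token's surprisal to the distribution's entropy; simplified from the actual method):

```python
import math

def typical_filter(probs, mass):
    """Rank tokens by how far their surprisal (-log p) deviates from the
    distribution's entropy, keep the closest ones until `mass` probability
    is covered, and drop the rest - cutting both likely and unlikely
    outliers."""
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    deviation = [abs(-math.log(p) - entropy) if p > 0 else float("inf")
                 for p in probs]
    order = sorted(range(len(probs)), key=lambda i: deviation[i])
    keep, cumulative = set(), 0.0
    for i in order:
        keep.add(i)
        cumulative += probs[i]
        if cumulative >= mass:
            break
    kept = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    total = sum(kept)
    return [p / total for p in kept]
```

Note how, with a low setting, even the most likely token can be cut if its surprisal sits too far below the entropy baseline.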

Repetition Penalty Range
Defines the number of tokens that will be checked for repetitions, starting from the last token generated. The larger the range, the more tokens are checked.

Dynamic Penalty Range
When Enabled, the Repetition Penalty is only applied to the story. All text injections (Lorebook, Author's Note, Memory, Ephemeral Context) will be ignored for the purposes of repetition penalty.

Repetition Penalty Slope
The penalty to repeated tokens is applied differently based on distance from the final token. The distribution of that penalty follows an S-shaped curve. If the slope is set to 0, that curve will be completely flat: all tokens will be penalized equally. If it is set to a very high value, it will act more like two steps: early tokens will receive little to no penalty, but later ones will be considerably penalized.
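One way to picture this is as a logistic ramp over the penalty range (an illustrative formula, not NovelAI's exact curve):

```python
import math

def penalty_weights(n_tokens, slope):
    """Weight of the repetition penalty at each position in the range,
    from oldest (index 0) to most recent. slope=0 gives a flat curve
    (every token penalized equally); high slopes approach a two-step
    shape. Illustrative formula only."""
    if slope == 0:
        return [1.0] * n_tokens
    mid = (n_tokens - 1) / 2
    return [1 / (1 + math.exp(-slope * (i - mid))) for i in range(n_tokens)]
```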

-



Phrase Bias
Phrase Bias allows you to tell the AI to increase or decrease the probability of single tokens or groups of tokens. You may have 1024 biases enabled at once. Any extra will cause generation to fail.

Rather than ban tokens outright, this is more of a direction to produce specific output, or avoid specific outputs without fully eliminating them.

Keep in mind that Phrase Bias is case sensitive. However, a variation of the first token with a space will be considered, even if you do not specify it. (This does not apply if the tokens are ;, :, <, >, &, @, #, %, ^, \n, and the entire range of unicode characters from u3000-u9faf and uff00-uff9f.)

Bias is collected in Groups with a certain Bias Level, which is applied to all phrases and tokens put in the Group.

To create a new Bias Group, click the ➕ button. The currently selected group can be deleted by clicking 🗑️.

To enter a new phrase or token in the Group, enter it in the text box and press Enter.


 * If you wish to insert token IDs (such as [198] for newlines), encase their numerical ID in Square Brackets: []


 * If you wish to insert the exact input, as is, case sensitive and with exact spacing, encase in Curly Braces {}.

Bias Level is a non-linear scale from -2 (less likely) to +2 (more likely). It is not an arithmetic scale: -2 is much stronger than simply twice the effect of -1, and the same holds for +1 and +2. Generally, a bias of -0.25 to +0.25, usually lower, works well for most use cases.

Ensure Completion After Start
If checked, the AI will make sure Phrases are completed if their first token is generated.

For example, if you’ve given a positive bias to the phrase “blue business suit” with this function turned off, the AI might start generating outputs featuring blue lights, blue planets, or people with blue eyes. With Ensure Completion After Start turned on, the word “blue” will always be followed by the phrase “business suit”.

Unbias after Generation
If checked, the AI will disable the bias for the remainder of the generation once it has been applied once. Following the previous example, the first instance of "blue" will output "blue business suit", but all following instances will work as normal.

Ban Token
Any Tokens added here will have their likelihoods reduced to zero. This means they will not appear in Generations. As this adjusts the relationships between Tokens, this will have an impact on the phrasing chosen by the AI. Be careful about what you ban, because this can heavily disrupt output if used incorrectly.

When you add a new token to the banlist, it also adds any case-sensitive variation of the token, and the token with a preceding space as well.

To prevent that from happening, and exclusively ban this token, add curly braces { } around the token you want to ban before pressing enter. You may have 2048 bans at once, any extra will cause generation to fail.

Banned Tokens
Relating to the previous setting, this field shows every Token currently blacklisted for generation. Clicking one of these tags will remove it from the list.

If you inserted an exclusive token (using curly braces), it will be displayed with brackets around it.

Ban Bracket Generation
At times, you may wish to include hints to the AI that are not considered for text generation. These can be encapsulated in square brackets ([ and ]) to relay information that will affect the Current Context while not being considered part of the actual text.

These most often take the form of hints. For more information on what to put between square brackets, see Keeping Track.

End of Sampling Token ID
If you wish to end your Generations upon reaching a specific Token, simply add it to this field. Doing so will cause the Generation to end prematurely upon generating the Token.

This can be used to trim the output to single sentences by inputting punctuation, or to increase the accuracy of Lorebook entries by pausing Generation when a defined word is reached.

Min EoS Output Length
This adjusts the minimum number of Tokens the AI returns before it has a chance to generate the End of Sampling token. The output will never be shorter than this target, but may exceed it.
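Taken together, these two settings behave like a sampling loop that ignores an End of Sampling token until the minimum length is reached. A hedged Python sketch of that rule (the stream of token IDs is invented; this illustrates the described behavior, not NovelAI's actual code):

```python
def generate(next_token, eos_id, min_length, max_length):
    """Collect token IDs until eos_id is produced, suppressing any EOS
    that appears before min_length tokens have been emitted."""
    output = []
    while len(output) < max_length:
        token = next_token()
        if token == eos_id:
            if len(output) >= min_length:
                break      # EOS honored: generation ends here
            continue       # EOS came too early: ignore it and keep going
        output.append(token)
    return output

# Toy stream of token IDs, with 0 acting as the End of Sampling token
stream = iter([5, 0, 6, 7, 0, 8])
result = generate(lambda: next(stream), eos_id=0, min_length=3, max_length=10)
# the first 0 arrives after only one token and is ignored;
# the second 0 arrives after three tokens and ends the generation
```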

⬆ Return to Page Top

Token Probability Viewer


'''This feature must first be enabled in the AI Settings panel of the Account Settings window. Doing so adds a 🧠 button, which opens the Probability Viewer.'''

The Token Probability Viewer is a powerful diagnostic tool that lets you see which choices the AI considered before committing to a generation. This is hugely useful for fine-tuning generation settings and for checking why the AI keeps outputting the same response.

The left side of the interface displays the response text. You can switch between displaying the text proper or the Token IDs. Cooler colors represent tokens with a low likelihood of appearing, while hotter colors represent highly likely tokens.

If you click a token, the right panel updates to show the tokens the AI considered, both before and after generation settings, biases, bans, and modules are applied.

Only the 10 most likely tokens are shown. The remainder are grouped into a single entry to save on processing time.
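The grouping can be pictured as a simple top-k cut over the probability distribution: keep the ten most likely tokens individually and collapse everything else into one bucket. A sketch with invented numbers:

```python
def top_k_view(probs, k=10):
    """Rank token probabilities, keep the k most likely entries,
    and sum the remainder into a single 'everything else' bucket."""
    ranked = sorted(probs.items(), key=lambda item: item[1], reverse=True)
    return ranked[:k], sum(p for _, p in ranked[k:])

# Twelve toy tokens whose probabilities sum to 1
probs = {f"tok{i}": (i + 1) / 78.0 for i in range(12)}
top, rest = top_k_view(probs, k=10)
```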

⬆ Return to Page Top

Text and File Export
You can export a story as a .story file by clicking the 📩 button when selecting it in the Story Library.

At the bottom of the Story Tab in the settings menu, you'll find various options for saving and loading stories and other data. Clicking the arrow next to To File lets you export to different file formats.


 * Duplicate Story creates a copy of the story in your library and switches to it. The story will have "- Copy" appended to its name.


 * Export Generation Settings exports only the AI settings you have currently selected. This is useful if you find settings you believe are worth sharing.


 * To File allows you to download the story as a .story file, ready for import to NovelAI.


 * As Scenario allows you to download the story as a .scenario file, which will prompt the user to fill in placeholders on import.


 * As Plaintext allows you to download the story as a .txt file, removing all other data (generation settings, etc).


 * To Clipboard copies the complete JSON content of the story to your clipboard, which can cause a long processing delay on your computer.


 * As Image opens the Screenshot Designer.

Generation Setting Export
You can export generation settings by clicking Export next to Config Preset at the top of the settings tab.

Exporting Lorebooks
You may export the entire story Lorebook as a .lorebook file by clicking the 📥 Export Lorebook button at the top left of the Lorebook window.

Importing
Import File at the bottom of the Story Library is used to import anything created by NovelAI, with the exception of themes: .txt, .story, .lorebook, or .scenario files. Alternatively, you can drag & drop these files onto the main text box to import them.

.preset files must be imported directly from the Config Preset's Import menu, not the Import menu at the bottom of the Options tab.

⬆ Return to Page Top

Screenshot Export


The Screenshot Designer is accessed by clicking As Image in the export dropdown.

The Screenshot Designer helps you create neat visual snippets of your story for sharing on social media, chat services, and much more. There are many elements that can be toggled for your convenience.


 * Show Title toggles the display of the story title at the top of the image.


 * Show Date toggles the display of the date at which the screenshot was taken.


 * Show Pen Name toggles the display of your Pen Name as set in your Account Settings.


 * Show NAI Logo toggles the display of the logo and URL of NovelAI at the bottom of the image.


 * Show Color Highlighting toggles the display of Prompt/User/AI/Modified text highlighting.


 * Show Color Legend (if above is enabled) toggles the display of a legend showing which color represents which type of input.


 * Show AI Model toggles the display of "Written alongside" and the portrait of the AI model in use.


 * Show Background toggles the display of decorative background graphics in the image.

⬆ Return to Page Top

= Content Creation =

Defining Comments
Any line that begins with two pound signs (##), with no space after them, is considered a comment and will not be read by the AI. You can use comments to give information to your end user.
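As a sketch, the rule amounts to dropping every line that starts with `##` before the text reaches the AI (an illustration of the described behavior, not NovelAI's actual code):

```python
def strip_comments(story_text):
    """Remove comment lines (those beginning with '##') so they
    remain visible to the reader but are never sent to the AI."""
    kept = [line for line in story_text.splitlines()
            if not line.startswith("##")]
    return "\n".join(kept)

text = "##Fill in your hero's name below.\nThe hero stepped forward."
visible_to_ai = strip_comments(text)
```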

Importing
You can create exportable scenarios from the options menu, and import them the same way you import a story: by dropping the file onto the webpage, or by clicking the Import Story button when creating a new story.

NOTE: If you are on iOS, you may encounter issues importing files. Rename the extension to .json to fix this issue.

Importing a scenario with placeholders
When importing a scenario, you may be asked to fill in some information. These fields are called placeholders; simply edit them to your liking.



Turning on Import Settings will import the Generation settings that were used by the Scenario author.

Adding a placeholder
You can add a placeholder anywhere within your story's prompt, memory, author's note and lorebook entries.

Placeholders have to be written in a specific format, divided into five parts:


 * order: the order in which the placeholders will be displayed. 1 goes first, then 2, etc.
 * id: the only mandatory part of the placeholder, it has to be unique. If you have more than one instance of the id, it will use the same value for each of the placeholders.
 * default: the default content of the field when importing the value.
 * title: The title used in the placeholder import window to tell the user what they need to fill in.
 * description: the text displayed above the input field when importing the value. If there is no description set, the text displayed will be the id. You may want to word this like a question for ease.

'''You cannot put these characters inside the text fields of the placeholder: $ {} [] # : @ ^ |'''

Note for the lorebook entries
For lorebook entries, you can add placeholders in the title of an entry, its description, and its keys. If you want to use regex for your keys, you have to prefix the expression accordingly, for example when matching the name of a character.

Placeholder-filling Order
The user is asked to fill in placeholders in the alphabetical order of their id. This means that if you start each id with a number that increments with each placeholder, they will be requested in the order of that number.

Example: Job will be requested before Gender, because its ID comes before it.

You can also define the order by preceding the id with a number, followed by a pound sign (#).

If different entries have the same Order number, they will be processed alphabetically according to their id.
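The rule can be sketched as a plain alphabetical sort over the id strings (the ids below are hypothetical examples). Note that if the sort is strictly alphabetical, "10#" would come before "2#", so single-digit prefixes are the safest choice:

```python
def request_order(placeholder_ids):
    """Sort placeholder ids alphabetically, matching the described
    fill-in order (the ids here are invented for illustration)."""
    return sorted(placeholder_ids)

order = request_order(["2#gender", "1#job", "appearance"])
# digits sort before letters, so the numbered ids come first:
# ["1#job", "2#gender", "appearance"]
```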

Placeholder Table of Contents
You can create a Table of Contents for placeholders, where you can insert a large number of them in advance, allowing you to easily keep track of all the placeholders you have defined.

The syntax is identical to normal placeholders, with these notable differences:


 * It must be inserted at the absolute top of the prompt.
 * The initiating symbol is a percentage sign (%) rather than a dollar sign.
 * Every Placeholder must be on its own new line.



⬆ Return to Page Top

Formats
Formats are different ways of writing Memory, Author's Note, and Lorebook entries. Contributor Valahraban wrote an extensive research report covering several formats, their utility, and how to use them.

'''NovelAI does not recommend, endorse, or otherwise support any format type in particular. Neither does the Unofficial knowledge base.'''

⬆ Return to Page Top

-

= Module Training =

Consult Datasetting for AI Modules for more information on how to prepare files for Module training.

AI Module training can be accessed from the Story Library, then clicking on the Tools button (🧪), then Module Training.

The left part of the interface contains the dataset. Only raw text files, encoded in UTF-8, are supported.

Upload all the files necessary for your module with the Select File button. Give your module a Name and Description on the right-hand side.

Choose the model that your module is for. Modules for Sigurd are not compatible with Euterpe!

The Total # of steps needed to train: field displays the estimated number of steps for 100% coverage of all text files. It is not necessary to set the number of training steps to that number; a good soft limit is ~3000 steps. You may also overfit your module by using more than the estimate, which can have problematic or useful effects depending on what you seek to achieve.

Select the number of steps using the slider, then click Train! to start the training. It will take several minutes before the module is produced. Once done, you can save it and import it like any other story, scenario, or generation settings file.

Steps


You get a free allotment of steps per subscription period: the Opus tier gets 10,000 steps, while other tiers get 500. Free Steps renew every month and do not accumulate.

You can purchase paid steps in order to train large modules or train more than your free allotment gives you per month. Paid Steps are permanent until spent.

⬆ Return to Page Top

= Tokenizer =



The Tokenizer is a feature that allows you to check how your text is split into tokens before being sent to the AI. It can be accessed by pressing ALT+T or clicking on the Tools menu button (🧪) then "Tokenizer".

The Text tab will highlight each token in sequence, using a different color to make it more visible.

Token IDs will display the Token Identification Code that the AI uses. This is very useful for banning specific tokens exactly (without the added space or case variants) using Ban Token ID.
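In BPE-style tokenizers, "blue", " blue", "Blue", and " Blue" are usually four distinct tokens with four distinct IDs, which is why a ban by ID is more surgical than a ban by text. A toy sketch (the vocabulary and IDs are invented for illustration):

```python
# Invented vocabulary: surface string -> token ID
toy_vocab = {"blue": 4561, " blue": 2854, "Blue": 8871, " Blue": 10909}

def text_ban_ids(word, vocab):
    """A plain text ban also catches spacing and capitalization variants."""
    variants = {word, word.capitalize(), " " + word, " " + word.capitalize()}
    return sorted(vocab[v] for v in variants if v in vocab)

def id_ban_ids(token_id):
    """A Ban Token ID entry targets exactly one token."""
    return [token_id]

broad = text_ban_ids("blue", toy_vocab)   # four IDs affected
narrow = id_ban_ids(toy_vocab["blue"])    # only one ID affected
```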

⬆ Return to Page Top

= Lore Generator =

The Lore Generator is a powerful tool that allows you to generate Lore for several types of objects, people, and concepts.

First, select a type from the dropdown, choosing one that fits the kind of content you want generated. The types may appear a little broad, so pick whichever is the closest fit.

If you need the AI to be aware of elements in the Story, Memory, Author's Note, or Lorebook, open Add Context (Advanced), and tick their respective boxes or enter Keys that will activate Lorebook Entries that you want the AI to be aware of when you generate the Lore Entry.

Once you are ready, type into Input Text:
 * The Name of the element you want generated (or a short description)
 * Pointers, in parentheses, separated by commas.

Here is an example:

Press ▶ or Ctrl/⌘+Enter to generate. Click 🔄 to retry the last generation. You can freely edit the text, and keep generating, just like in the main text editor!

Generation History is similar to the Retry Tree, displaying the last generations for all entries. The list is purged on refresh.

⬆ Return to Page Top