
= Directing AI Generation =

This section is based on community findings using Sigurd V3. The information is outdated and is kept mostly as a historical snapshot. Techniques shown here are unlikely to work well in Krake or Euterpe, which use updated finetunes.

This is an expert guide. There will be much less hand-holding from now on. Terms and tools will not be explained if they were defined in the feature pages and Advanced Writing, so ''make sure you have fully read those pages before continuing''.

Thanks go to Cass for putting together the initial draft, OccultSage for formatting and enriching it, Kalmarr and TravelingRobot for experimentation, and the folks of #community-research who went through hundreds of permutations in the name of research!


 * Tags & Metadata
   * Categories
   * Layering
 * Tag Discovery
   * Formatting
   * Tags Associated with an Author
   * Authors Associated with a Tag
   * Tag-Starting a Prompt
   * Give me a Story, Any Story
   * Seeded Prompt Generation
 * Tips
 * Scaffolding

Tags & Metadata
The finetune team has done a lot of work tagging the NovelAI model's finetune data with metadata such as Genre, Author, or Tags. Additionally, the trained model itself has emergent properties that strongly associate certain words and styles, making them useful as metadata tags.

These tags can have a profound influence on your stories if included in the Author's Note section, or in the prompt itself.

The people in #community-research on the Discord server have been experimenting with various formats. The list below contains attribute-tag pairings that can be used to generate prompts.

Please note that capitalization matters! So does whether there is a space before the closing ]. The ordering of tags within an attribute can also make a profound difference!

Try various combinations! Some of us have discovered that including prose descriptions associated with the attributes has worked better.

These tags work best with either some context already present in the prompt, or with narrative direction. Capitalization, bracket use, spacing, and order can all make a small or large difference.

(Preliminary findings. All information below is subject to change and may not be accurate for all settings and versions.)
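Since capitalization and closing-bracket spacing both matter, it can help to enumerate the variants systematically rather than typing them out by hand. A minimal Python sketch (the attribute and tag values are illustrative, not an official list):

```python
from itertools import product

def tag_variants(attribute, value):
    """Enumerate bracketed tag lines over capitalization and spacing choices."""
    variants = []
    for attr, val, space in product(
        (attribute.capitalize(), attribute.lower()),  # attribute case
        (value.capitalize(), value.lower()),          # tag value case
        ("", " "),                                    # space before the closing ]
    ):
        variants.append(f"[ {attr}: {val}{space}]")
    return variants

for line in tag_variants("genre", "fantasy"):
    print(line)
# e.g. [ Genre: Fantasy], [ Genre: Fantasy ], [ genre: fantasy ], ...
```

Paste each candidate into the prompt or Author's Note and compare the generations to see which variant the model responds to most strongly.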

Categories
-  (whether or not it is capitalized can have an influence)

-  (flexible, usually lowercase, adding a space after a word may help)

-  or   (single author may work better)

-  (for a list of characters)

-  (very flexible, try various capitalization and spacing)

-

-

-  or

-

-  (flexible, can use Name 1/Name 2 or switch order)

-

Layering
Layering categories on separate lines may produce a stronger effect. You can also use layering to combine two different Genres, and you can swap their order, too!


 * When layering, it seems the AI will prioritize the bottom one (most of the time).


 * Genres may get stronger if you layer them.


 * Using a term in multiple categories may have a stronger effect.
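The layering idea above boils down to stacking bracketed lines, with the bottom line usually winning out. A tiny Python sketch of combining two genres this way (the genre values are illustrative):

```python
def layer(*lines):
    """Stack bracketed metadata lines top-to-bottom; the bottom (last) line
    tends to be prioritized by the model."""
    return "\n".join(lines)

stacked = layer("[ Genre: horror ]", "[ Genre: romance ]")
print(stacked)
# [ Genre: horror ]
# [ Genre: romance ]
```

Swapping the argument order swaps which genre sits on the bottom, and therefore which one the AI tends to favor.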

Tag Discovery
The most frequently asked question was, "How do I discover what tags, genres, and authors there are?" This set us out on another journey of discovery.

All testing was done with Sigurd v3 on default settings, with Max Output Length set to 60 tokens. Sigurd v3 is where a lot of the finetune team's tagging and cleanup work happened.

Formatting
The most important thing to do is enable bracket generation -- if it is disabled, the AI will throw out the highest-scoring alternates (those containing brackets) and go for the highest score without brackets. You should also disable trimming of incomplete sentences.

The metadata that the finetune team tagged is of the following format:

If you write your prompt in the above format, you will get very powerful results. Tags are almost always lower-case, and case matters! The semicolon ; delimiter is important, as is spacing.
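The exact metadata line is not reproduced here, but going by the conventions this guide describes (square brackets, Attribute: value pairs, the ; delimiter, lower-case tags), a helper for assembling such a line might look like the following. The function name and attribute set are assumptions for illustration, not the finetune team's actual schema:

```python
def metadata_line(author="", tags=(), genre=""):
    """Build a bracketed metadata line; tags are joined lower-case with commas.
    Attributes left blank are still emitted, since blank tags can be useful."""
    parts = [
        f"Author: {author}",
        f"Tags: {', '.join(t.lower() for t in tags)}",
        f"Genre: {genre.lower()}",
    ]
    return "[ " + "; ".join(parts) + " ]"

print(metadata_line(author="Jane Doe", tags=["Adventure", "dragons"], genre="Fantasy"))
# [ Author: Jane Doe; Tags: adventure, dragons; Genre: fantasy ]
```

Note how the helper lower-cases tag and genre values by default, matching the observation that tags are almost always lower-case.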

Tags Associated with an Author
As an example, suppose you want to discover the tags associated with an author. You would write:

It will result in something like the following:

Authors Associated with a Tag
If you want to know what authors are associated with a tag:

This results in the following:

Tag-starting a Prompt
If you start a story prompt with all the metadata attributes, you will have a very powerful kick-start!

This generates the following:


Give me a Story, any Story
If you put the following:  in a prompt all on its own, it will generate surprisingly coherent stories in any random genre.

Further Refinement
You can fill in the Author, Tags, or Genre attributes individually and leave the others blank.

Author:

Tags:

Genre:

Author, Tags, and Genre:
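Filling attributes selectively can be expressed literally by leaving the unused ones blank. A hypothetical helper, assuming the Author/Tags/Genre attribute names described earlier:

```python
def partial_metadata(author="", tags="", genre=""):
    """Emit the full attribute list, leaving unfilled attributes blank."""
    return f"[ Author: {author}; Tags: {tags}; Genre: {genre} ]"

print(partial_metadata(genre="fantasy"))  # only Genre filled
# [ Author: ; Tags: ; Genre: fantasy ]
```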

Other Attributes
The consensus in #community-research is not as conclusive here, but evidence suggests that any additional metadata that you want to add should be in the same [] as the Author, Tags, and Genre attributes.


Seeded Prompt Generation
You can use a random string of numbers and characters as a "seed"; the AI will use it to create a prompt. By setting Top-K Sampling to 1 (and disabling the other samplers), you will get the exact same answer on Sigurd v3 when using the same seed.

Your seed can also be composed of words, or even a request, although this will severely reduce the scope of randomness.
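If you want a fresh random seed each time, a few lines of Python will produce a string of letters and digits of the kind described above; the length is an arbitrary choice:

```python
import random
import string

def make_seed(length=16):
    """Generate a random string of letters and digits to paste in as a seed."""
    alphabet = string.ascii_letters + string.digits
    return "".join(random.choices(alphabet, k=length))

print(make_seed())  # e.g. a 16-character alphanumeric string, random each run
```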

Tips

 * The big 3 elements are Genre, Tags, and Author. They tend to work better when capitalized (e.g. Genre).


 * The other categories tend to work better lowercase.


 * The "Words" have a different effect depending on whether they are capitalized and whether there is a space after them.


 * Most categories can have more than one tag, separated by commas or semicolons.


 * Using too many tags on one line may dilute their effect.


 * You can use blank tags too! (e.g. [ Genre: ]).


Scaffolding
Scaffolding is the mystical art of organizing things depending on how relevant they are to what you need right now.

Conceptually, it is very simple. Imagine a queue: every entry takes a numbered ticket and gets in line. Ticket 0 sits at the bottom of the context window, ticket 1 on top of it, ticket 2 on top of that, and so on.

All of these settings are set through the Context Viewer and Lorebook.

This is your Insertion Order setting. Story is 0.

Anything that is very important should be as close to 0 as possible.

To make sure you still have room for the story, however, you'll need to reserve a certain number of tokens for it. By default, the story has 512 tokens reserved. That is half the window on Tablet tier. If you are on Scroll or Opus, you can easily raise that to 1024.

This is your Reserved Tokens setting. You only need to edit these two settings for an easy scaffold.
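The ticket-queue behavior described above can be sketched as a simple sort on Insertion Order, with lower numbers landing closer to the bottom of the context. The entry names and numbers here are illustrative, not an actual scaffold:

```python
def build_stack(entries):
    """entries: list of (name, insertion_order) pairs. Lower order means
    closer to the bottom of the context window (order 0 is the story itself).
    Returns the entries top-to-bottom, as they would appear in the context."""
    return [name for name, order in sorted(entries, key=lambda e: e[1], reverse=True)]

stack = build_stack([
    ("Story", 0),
    ("Characters", 1),
    ("Locations", 2),
    ("World lore", 3),
])
print(stack)  # ['World lore', 'Locations', 'Characters', 'Story']
```

The story (order 0) ends up at the very bottom, right where the AI reads last before generating, which is why the most important material belongs near 0.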

In order to set it up, create Categories in your lorebook. Each Category should have its Subcontext enabled. The settings you will edit are the settings of the Subcontext only.

Here is an example which Bunray uses for Akyuu's Knowledge for NovelAI.

Note: This scaffold has 1612 tokens allocated. If you are on Tablet tier, reduce the categories' reserved tokens to 100 (50 for low-priority categories).

This will position everything in a stack, just like in the table. Things that should be close together are kept together.

The Prefix field is set to [ and the Suffix field is set to ]\n. This will effectively "encase" all categories in a bracket block, and separate them with a newline.
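The Prefix/Suffix encasing can be pictured as plain string concatenation around each subcontext's text; the category bodies here are placeholders:

```python
def wrap_category(body, prefix="[", suffix="]\n"):
    """Encase a subcontext's text in the Prefix/Suffix pair described above."""
    return prefix + body + suffix

blocks = [wrap_category("Characters: ..."), wrap_category("Locations: ...")]
print("".join(blocks))
# [Characters: ...]
# [Locations: ...]
```

Each category comes out as its own bracket block on its own line, which keeps the metadata visually and structurally separate from the story text below it.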

You can add  to the suffix of the Objects block to reinforce the separation between data and story, but this usually isn't necessary.

OnePunchVAM offered the following sheet. Insertion order and position columns are flipped, to be consistent with NovelAI's.

This means that your story will be limited to 512 tokens, and the rest of the context will be filled with as much information as possible. All Forced Active entries will be inserted as close to their ideal positions as possible. After that, entries follow the insertion order, going from the closest to 0 to the furthest from 0.

You can adjust that scaffolding to your liking. If you don't need to define certain things, simply spend the token budget on others until you reach 2048-200 total tokens, or 1024-100 total tokens on Tablet. (The context window is 1900 tokens on Krake.)

Kalmarr offers several scaffolding examples and more in-depth information on his Research Page.
