Process Archives – Page 2 of 3 – Once Upon an Algorithm
Process
In Stable Diffusion, CFG stands for Classifier-Free Guidance scale. CFG is the setting that controls how closely Stable Diffusion follows your text prompt, and it applies to both text-to-image (txt2img) and image-to-image (img2img) generations. In theory, the higher the CFG value, the more strictly the output follows your prompt. The default value is 7, which strikes a good balance between creative freedom and following your direction. A value of 1 will give Stable Diffusion…
Continue reading
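Under the hood, the CFG scale blends two noise predictions: one conditioned on your prompt and one unconditioned. A minimal sketch of that blend in plain Python (toy two-element vectors standing in for real model outputs, not actual Stable Diffusion code):

```python
def cfg_combine(uncond, cond, scale):
    """Classifier-free guidance: push the conditional prediction
    away from the unconditional one by `scale`."""
    return [u + scale * (c - u) for u, c in zip(uncond, cond)]

# Toy stand-ins for the two noise predictions.
uncond = [0.0, 0.2]
cond = [1.0, 0.4]

# At scale 1 the result is just the conditional prediction;
# higher scales exaggerate the difference from the unconditional one.
print(cfg_combine(uncond, cond, 1.0))  # [1.0, 0.4]
print(cfg_combine(uncond, cond, 7.0))
```

This is why very high CFG values can look overcooked: the guided prediction is pushed far outside the range of either original prediction.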
Process
Since I started using Stable Diffusion, I have seen a lot of confusion and misinformation on the Internet about what seeds are, how Stable Diffusion uses them, and how users can put them to work. So to help clear things up, I am providing this free guide explaining what seeds are and how you can use them to fine-tune your generated images. What is a Seed? Here’s the answer, plain and simple: a…
Continue reading
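The principle behind seeds can be illustrated with Python's standard `random` module (a stand-in for the latent-noise generator, not Stable Diffusion's actual code): the seed fully determines the starting noise, so the same seed with otherwise identical settings reproduces the same image.

```python
import random

def initial_noise(seed, n=4):
    """Toy stand-in for the latent noise an image generation starts from."""
    rng = random.Random(seed)  # the seed fully determines the stream
    return [rng.gauss(0, 1) for _ in range(n)]

# Same seed -> same starting noise -> same image (for identical settings).
assert initial_noise(42) == initial_noise(42)

# Different seed -> different noise -> a different image.
assert initial_noise(42) != initial_noise(43)
```

This is why sharing a seed along with the prompt and settings lets someone else reproduce your image.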
Process
Maybe you’ve heard about ControlNet, or maybe you haven’t but have seen some of the truly amazing images it can achieve. But what is it? How can you set up ControlNet and start using it yourself in Stable Diffusion? Luckily, this guide is here to help you get started! What is ControlNet and why use it? ControlNet takes the standard img2img tool in Stable Diffusion and ratchets it up…
Continue reading
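The core idea can be sketched in a few lines of toy Python (stand-in arithmetic, not the real architecture): ControlNet runs a trainable copy of the model's encoder on a conditioning image, such as an edge map or a pose skeleton, and adds its output into the main denoising network.

```python
def unet_block(x):
    """Toy stand-in for a denoising layer in the main model."""
    return [2 * v for v in x]

def controlnet_block(cond):
    """Toy stand-in for the parallel control branch."""
    return [0.5 * v for v in cond]

def denoise_with_control(x, cond, control_strength=1.0):
    # The control branch processes the condition (edges, pose, depth, ...)
    # and its output is added into the main network's activations,
    # steering the image toward the structure of the condition.
    base = unet_block(x)
    control = controlnet_block(cond)
    return [b + control_strength * c for b, c in zip(base, control)]

# With strength 0 the control image is ignored entirely.
print(denoise_with_control([1.0], [2.0], control_strength=0.0))  # [2.0]
```

In the real model, `control_strength` corresponds to the weight slider that determines how strongly the condition image shapes the result.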
Process
Although Stable Diffusion is completely free and open source, there are still quite a few barriers to running it yourself. Sure, you can use polished apps such as Midjourney or online-only text-to-image generators like Lexica, but you’re here because you want to take the next step: significantly more control over your images and access to all of Stable Diffusion’s features. This article will tell you…
Continue reading
Process
If you want to take your AI image generation in Stable Diffusion to the next level and consistently get the same style across many images with different subjects, then training an embedding is worth your while. There are a few situations in which this could be helpful: This tutorial uses screenshots from the AUTOMATIC1111 Web UI with Stable Diffusion v1.5, running on RunPod.io. What is an Embedding? The embedding layer encodes inputs such as text prompts into low-dimensional vectors…
Continue reading
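A toy sketch of what the embedding layer does (hypothetical two-dimensional vectors; real models use hundreds of dimensions): each token in the prompt is looked up in a table of vectors, and training an embedding (textual inversion) simply adds one new learned vector for a pseudo-token, leaving the rest of the model untouched.

```python
# Toy embedding table: each token maps to a small vector.
embeddings = {
    "photo": [0.1, 0.9],
    "of": [0.5, 0.5],
}

# Textual inversion adds a new learned vector for a pseudo-token like
# "<my-style>" without changing any other model weights.
embeddings["<my-style>"] = [0.7, -0.2]  # values would be learned, not chosen

def encode(prompt):
    """Look up each whitespace-separated token's vector."""
    return [embeddings[t] for t in prompt.split()]

print(encode("photo of <my-style>"))
```

Because only this one small vector is trained, embedding files are tiny compared to full model checkpoints.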
Process
OpenAI recently released a new GPT-3 model version, Davinci-3. This model greatly improves on the formerly most powerful model, Davinci-2. However, it also has some oddities that are worth knowing about. You can select Davinci-3 from the Model drop-down menu on the right side of the Playground web interface. Poetry and the Concept of Rhyming First, the concept of rhyming is much more advanced in the Davinci-3 model, with every couplet having a…
Continue reading
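The same model can also be used outside the Playground. A minimal sketch of the request (assuming `text-davinci-003` is the API identifier corresponding to the Playground's Davinci-3 entry; the actual call, shown in a comment, would need the legacy `openai` package and an API key):

```python
# Parameters mirroring the Playground's settings panel.
request = {
    "model": "text-davinci-003",
    "prompt": "Write a rhyming couplet about the sea.",
    "max_tokens": 64,
    "temperature": 0.7,
}

# With the legacy `openai` package this would be sent as:
#   openai.Completion.create(**request)   # requires an API key
print(request["model"])
```

Keeping the parameters in a plain dict like this makes it easy to rerun the same prompt across models for comparison.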
Process
Learn how to generate an original AI character and seamlessly composite it into a unique background using Stable Diffusion and Photoshop
Continue reading
Process
A detailed guide with images on how Stable Diffusion responds to color, tone, saturation and contrast words in your prompts
Continue reading
Process
Learn to create and modify an image generated with Stable Diffusion to get a scene with a specific character using inpainting
Continue reading
Process
Learn about OpenAI’s GPT-3 AI language model settings and how to use them to write stories and other creative works.
Continue reading