
ControlNet, Stable Diffusion, Automatic1111 & Dreambooth - Consistent characters and poses!
The ControlNet extension and Open Pose Editor for Stable Diffusion are the talk of the town! See how you can gain more control in Stable Diffusion with trained Dreambooth models, ControlNet, Open Pose Editor and Automatic1111. This is what all AI artists and illustrators have been waiting for. This video is created on the Wacom Cintiq Pro 32 with Stable Diffusion, Photoshop, RunDiffusion and MacBook Pro, Atem Mini Pro, After Effects and Adobe Premiere. ------------------------------- SUBSCRIBE ▶ handle: https://www.youtube.com/@levendestreg ▶ You can subscribe to our channel here: https://www.youtube.com/user/levendestreg?sub_confirmation=1 ▶ RunDiffusion Promo Code: levendestreg15 ▶ RunDiffusion reference: https://bit.ly/RunDiffusionLevendeStreg ▶ ControlNet: https://github.com/lllyasviel/ControlNet ------------------------------- 00:00:00 Create consistent characters and poses with ControlNet in Stable Diffusion, Automatic1111, Open Pose Editor and Dreambooth. 00:00:24 What is ControlNet? It is a technology that we could not even have imagined a month ago. 00:00:38 To use ControlNet you need to install the extension and the different models. Please note that the models are huge, so you will probably need around 50 gigabytes of storage. 00:01:16 In ControlNet I either drop in an image of the pose I want, do a sketch of the pose, or use the Open Pose Editor. 00:01:25 Open Pose Editor. 00:02:16 The Canny model. 00:02:19 The Scribble model, which works well for poses. 00:02:23 Depth creates a kind of depth map. 00:02:28 MLSD is great for buildings, backgrounds and rooms. 00:02:33 HED creates softer boundaries. 00:02:42 With Scribble you import a scribbled image and then create the outline. 00:02:56 Open Pose is the talk of the town. 00:03:09 Normal Map creates sort of a greenish-purple image. 00:03:17 Hand poses. 00:03:29 What about the colors? 00:03:41 But what about the background? Multi-ControlNet.
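To get a feel for what preprocessors like Canny and Scribble actually hand to ControlNet, here is a toy edge-map sketch in Python. This is only an illustration with numpy - the real extension runs a proper Canny detector with smoothing and hysteresis, and the function name and threshold here are made up for the demo:

```python
import numpy as np

def edge_map(gray, thresh=0.25):
    """Toy edge detector: gradient magnitude + threshold.

    A much-simplified stand-in for the Canny preprocessor that the
    ControlNet extension runs on your reference image before the Canny
    model conditions the diffusion on those outlines.
    """
    gy, gx = np.gradient(gray.astype(float))  # vertical, horizontal gradients
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag = mag / mag.max()                 # normalize to 0..1
    return (mag > thresh).astype(np.uint8) * 255  # white edges on black

# A tiny 6x6 "image": dark left half, bright right half -> one vertical edge
img = np.zeros((6, 6))
img[:, 3:] = 1.0
edges = edge_map(img)
```

The resulting black-and-white outline image is the kind of conditioning map the Canny and Scribble ControlNet models are trained to follow.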
published 2023-02-25

THIS IS CRAZY!!! Stable Diffusion! ControlNet / Instruct pix-2-pix / QuickFix for DreamBooth!
Check out the new "Instruct Pix2Pix" model + extension and the ControlNet extension. Also, Dreambooth is broken! Here's the QUICK FIX! This video is created on the Wacom Cintiq Pro 32 with Stable Diffusion, Photoshop, RunDiffusion and MacBook Pro, Atem Mini Pro, After Effects and Adobe Premiere. ------------------------------- SUBSCRIBE ▶ handle: https://www.youtube.com/@levendestreg ▶ You can subscribe to our channel here: https://www.youtube.com/user/levendestreg?sub_confirmation=1 ▶ RunDiffusion Promo Code: levendestreg15 ▶ RunDiffusion reference: https://bit.ly/RunDiffusionLevendeStreg ------------------------------- ▶ ControlNet: https://github.com/lllyasviel/ControlNet ▶ Instruct Pix2Pix: https://huggingface.co/timbrooks/instruct-pix2pix ▶ Info on Instruct Pix2Pix: https://stable-diffusion-art.com/instruct-pix2pix/ ▶ QuickFix for Dreambooth: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/6690 ▶ Load the model in RunDiffusion: aria2c -x 16 https://huggingface.co/timbrooks/instruct-pix2pix 00:00:00 Be the boss of Stable Diffusion. 00:00:25 First off, what you want to do is bring up a photo. 00:00:51 I can ask this Instruct Pix2Pix model to give the woman sunglasses. 00:01:26 You can read more on HuggingFace, and I have a link in the description below for that. 00:01:32 Before we can actually get this to work, you need to install the extension for that and the Instruct Pix2Pix model. And here I show you how to do that. 00:02:14 What you do is you simply click on that extension, click install and reload the web UI. 00:02:46 And the model you need to download - link in the description. 00:03:01 I'll show you how to do this on RunDiffusion. Simply toggle on the shell, and I would write "aria2c -x 16 https://huggingface.co/timbrooks/instruct-pix2pix". 00:05:19 If you are just starting out with Stable Diffusion, watch this video too. 00:05:33 So this is what we have all been waiting for: ControlNet.
00:05:54 What is ControlNet? It's an official implementation that lets you add conditional control to text-to-image diffusion models. It gives you even more power and control in the image to image tab. It's a neural network structure to control diffusion models. 00:06:21 RunDiffusion has installed ControlNet and it is up and working. 00:06:35 And in the description below, there is a promo code for 15% off when you sign up for Creators Club, so go check that out.
published 2023-02-18

NEW TRICK in Midjourney Nobody Wants You to Know!
The best Midjourney blend AI tutorial. This will show you how to blend three images (and more) together with the blend command. I will also show you how to guide your prompt for the best result. The blend command can be tricky, so learn more about it here. It's made on the Wacom Cintiq Pro 32 with Stable Diffusion, DALL-E, Photoshop, Google Colab and MacBook Pro, Atem Mini Pro, After Effects and Adobe Premiere. ------------------------------- SUBSCRIBE ▶ handle: https://www.youtube.com/@levendestreg ▶ You can subscribe to our channel here: https://www.youtube.com/user/levendestreg?sub_confirmation=1 ▶ RunDiffusion Promo Code: levendestreg15 ▶ RunDiffusion reference: https://bit.ly/RunDiffusionLevendeStreg ------------------------------- ▶ Style reference spreadsheet: https://docs.google.com/spreadsheets/u/1/d/1h6H2CqjLdZMbLjlz6EHwemfO4fkIzAfWtjRAMSa2KHE/htmlview# ▶ Midjourney Styles and Keywords references: https://github.com/willwulfken/MidJourney-Styles-and-Keywords-Reference/blob/main/README.md ------------------------------- 00:00:00 Are you looking for a better way to guide your prompts when doing blends with Midjourney? 00:00:35 How to add three images or more to your blend command in Midjourney. 00:01:01 In order to actually blend the three and guide the generation, you need to combine the three images first with the blend command, and afterwards you can tweak the generation by doing a remix. 00:01:37 In Stable Diffusion you don't need to blend images like that. You can train a model and then generate from any angle and perspective. Actually, there are five ways to train Stable Diffusion. 00:01:45 I have a mind-blowing upcoming episode on inpainting - where I'll show you an advanced method of doing inpainting. 00:02:02 In that episode, I will also show you a way better alternative to Google Colab - called RunDiffusion. 00:02:29 So what if I only do the blend command with the cottage and the man? That means only two images.
00:03:04 So instead of blending the images, there is a great way to do that AND guide the prompt as well. Let's look at that. 00:03:14 How to upload images to Midjourney and copy the image address. 00:04:52 It's actually a lot easier to do the /imagine command than the /blend command. And it often gives better results too.
published 2023-02-04

Midjourney - this will change how you write prompts! (Blend feature) FREE alternative to Midjourney!
This is the best Midjourney AI tutorial about the blend feature. And we'll look at a new and FREE alternative to Midjourney. I also share a really cool workflow hack for working in Midjourney, a style spreadsheet and a page on GitHub that will really change how you write prompts. It's made on the Wacom Cintiq Pro 32 with Stable Diffusion, DALL-E, Photoshop, Google Colab and MacBook Pro, Atem Mini Pro, After Effects and Adobe Premiere. ------------------------------- SUBSCRIBE ▶ handle: https://www.youtube.com/@levendestreg ▶ You can subscribe to our channel here: https://www.youtube.com/user/levendestreg?sub_confirmation=1 ▶ Read more: https://levendestreg.dk/en ------------------------------- ▶ Style reference spreadsheet: https://docs.google.com/spreadsheets/u/1/d/1h6H2CqjLdZMbLjlz6EHwemfO4fkIzAfWtjRAMSa2KHE/htmlview# ▶ Midjourney Styles and Keywords references: https://github.com/willwulfken/MidJourney-Styles-and-Keywords-Reference/blob/main/README.md ▶ BlueWillow: https://discord.com/invite/aEYshQE4b3 ------------------------------- 00:00:00 So, Midjourney has a powerful blend feature that enables you to mix two to five images together. 00:00:08 Did you know there is a new alternative to Midjourney that is FREE and works on Discord? But I will get back to that in a few minutes. 00:00:36 So I have some images here that I have downloaded from Freepik.com 00:00:53 And then I have simply uploaded that URL, as I will show you in a moment. 00:01:40 So this is how to do image blending in Midjourney. I have created an image of Athena, goddess of strategic war, and I am going to try to blend her with an image of me. 00:01:53 So I am going to go in here and click on the image, open the image in a new tab and simply copy that URL. 00:02:03 Actually, I'm going to head over into my note app 00:02:05 and I'm just going to copy in that link. 00:02:07 Then I am going to grab an image of me 00:02:11 and simply pull it into Discord and click return. 00:02:15 And that's going to process.
00:02:16 I'm going to click it, then right-click it and open the image in a new tab 00:02:21 and I'm going to grab the URL on that one 00:02:27 And here I'm going to copy that into my note app, too. And they're only divided with a space. So hit the spacebar and copy-paste that in. 00:02:45 Write /imagine, 00:02:48 and then simply copy in the URLs and hit return. 00:02:59 This blend feature depends very much on you being able to pick images that actually work well together. 00:03:49 So let's try to guide that generation of image blending. 00:03:54 So again, I am going to write /imagine, then I'm going to copy in the URLs that I had before, and I am going to write a comma after the last of the URLs. 00:04:05 And here I am going to write: Statue of Athena, Greek goddess of war. 00:04:43 If you want to train models, well, then I have an upcoming episode on how to do that in Stable Diffusion. 00:05:47 So this spreadsheet shows artists, painting styles, comics, cartoons, creatures, sci-fi styles, landscapes and so on. 00:06:14 Now for a super powerful tip in Midjourney: the preferred suffix value. 00:07:00 So now for the styles and keywords reference page - because this is really cool. 00:08:37 Now for the new and free alternative to Midjourney. Please note I've not received any payment to say this. It's called BlueWillow. It's an AI art tool that works like Midjourney.
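The workflow above - image URLs separated by single spaces, guiding text after a comma behind the last URL - can be sketched as a small Python helper. This is purely illustrative (the helper name and the example.com URLs are placeholders; Midjourney just reads whatever line you paste into Discord):

```python
def imagine_prompt(image_urls, text=None):
    """Build an /imagine line from image URLs plus optional guiding text.

    Mirrors the video's recipe: URLs divided only by a space, and the
    guiding text written after a comma following the last URL.
    """
    line = " ".join(image_urls)
    if text:
        line += ", " + text
    return "/imagine " + line

p = imagine_prompt(
    ["https://example.com/athena.png", "https://example.com/me.png"],
    "Statue of Athena, Greek goddess of war",
)
```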
published 2023-01-21

Midjourney - (the secret) to training Midjourney AI?
Is it possible to train Midjourney so you can create the same character from different angles and perspectives? This is how to do it. The secret to training Midjourney AI - a Stable Diffusion tutorial. And how to do character design with the help of Midjourney, DALL-E and Photoshop. SUBSCRIBE ▶ handle: https://www.youtube.com/@levendestreg ▶ You can subscribe to our channel here: https://www.youtube.com/user/levendestreg?sub_confirmation=1 ▶ Read more: https://levendestreg.dk/en ▶ Download cheat sheet: https://levendestreg.dk/en/tutorials/ ------------------------------- Links ▶ Documentation: https://midjourney.gitbook.io/docs/user-manual ▶ Midjourney's website: https://midjourney.com/home ▶ Once logged in, click this link to see your images: https://www.midjourney.com/app/ ▶ https://sketchfab.com/tags/head ▶ https://www.youtube.com/watch?v=ravETUa84P8 ▶ https://monstermash.zone/ ▶ https://huggingface.co/spaces/pharma/CLIP-Interrogator ------------------------------- This video is created on the Wacom Cintiq Pro 32 with MacBook Pro, Atem Mini Pro, Midjourney, DALL-E, Premiere Pro, After Effects and Photoshop. 00:00:00 Can you train Midjourney? I mean train it so that you can create the same character in different poses and from different angles? And if that's really possible - why isn't everybody doing it already? 00:00:19 And a lot of you wonderful people have asked about the possibility of somehow training Midjourney in what results you want. Or training it on different characters - or styles that you want to create. 00:00:56 Remember, there are links in the description below, among other things to a FREE CHEAT SHEET where I share some of my prompts with you. 00:01:25 What AI is the best for my needs? What AI can I get the best results with? And most importantly, how can I train AI to create the character or style that I want? 00:02:09 But is it possible to achieve that with Midjourney? Well, yes and no.
00:02:20 As the guidelines for Midjourney state - you cannot feed images directly into Midjourney like that, due to concerns about community public content. 00:02:30 Instead, Midjourney lets you use images as inspiration - the img2img feature, usually along with text, to guide the generation of an image. 00:02:50 And as I explained in this video about character design 00:03:09 First, let's see if we can get Midjourney to give us the cropping and content we want - the right part of the image, so to speak. 00:03:53 I'm going to give you seven different shots to choose from. There are a lot more - you just need to google them. But for now, we'll take a look at seven. 00:04:14 First we have the full shot. Then the medium full shot, cowboy shot, medium shot, medium close-up, close-up and extreme close-up. And then you have to choose your angle too. 00:04:54 So now you know the basics of what to ask Midjourney for. The trick here is to actually mention what angle you see the face from - for instance. And I've had good results with mentioning details on the eyes too, so Midjourney doesn't mess up the eyes. 00:06:05 To get the remix settings switched on, you simply write /settings - and toggle on remix. 00:06:48 The term seed sets the seed for an image. So, when you use the term seed, it means that Midjourney will use the same noise to create your image from. 00:07:04 And using a seed can sometimes help keep things more steady and easier to replicate when trying to generate a similar prompt again or get the same sort of image. 00:07:15 But - it can also be used for creating the same character in different poses, from different angles and perspectives. 00:07:53 Now, how do you find the seed for the image you want? Well, you add a reaction to the image you want to get the seed from. And you do this by right-clicking on the image. Then you press "add reaction" and then "other reaction". And here you just start to write env - and that'll bring up the envelope.
00:08:14 Now Midjourney will send you a private message. And here you get the seed number. 00:08:47 And as you can tell - all of a sudden I have a character to work with. It's magic. 00:08:55 So, what is sameseed? You use the term --sameseed to affect all images in the resulting grid in the same way. 00:09:13 But if you use sameseed - the four images will use the same slice. Then you can use DALL-E's paint-out tool and Photoshop to do corrections. 00:09:29 If you want to learn more about DALL-E and the paint-out tool, well, I've got an upcoming episode on that topic. 00:10:16 Because is there an easy way to turn your design into 3D models? The fast answer? Yes - and no! Of course Adobe has some amazing tools to create 3D models. 00:10:31 And in general, 3D has a steep learning curve. But of course there is always a shortcut or two that you can take. 00:10:41 So if you go to https://monstermash.zone/ and upload a 2D image - you can actually turn it into a 3D model - in a fairly simple way.
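The seed explanation above can be illustrated with a few lines of Python. Midjourney's internals are not public, so numpy's random generator stands in for the noise the model starts from - the point is only that a fixed seed reproduces the exact same starting noise, which is why --seed makes results easier to replicate:

```python
import numpy as np

def starting_noise(seed, shape=(4, 4)):
    """Toy stand-in for the random noise a diffusion model starts from.
    Fixing the seed fixes the noise, so the same prompt + same seed
    tends to land on the same image."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

a = starting_noise(1234)
b = starting_noise(1234)   # same seed -> identical starting noise
c = starting_noise(9999)   # different seed -> different starting noise
```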
published 2022-11-19
Unbelievable! What Midjourney v4 Can Do That Changes Everything...
Get yourself up-to-date with Midjourney AI #img2img, #promptengineering, #midjourney v4, #nijijourney, comic book backgrounds, upscaling with the use of AI and so much more. I've got a bombshell of information for you in this video, where we also talk about Stable Diffusion. SUBSCRIBE ▶ handle: https://www.youtube.com/@levendestreg ▶ You can subscribe to our channel here: https://www.youtube.com/user/levendestreg?sub_confirmation=1 ▶ Read more: https://levendestreg.dk/en ▶ Download cheat sheet: https://levendestreg.dk/en/tutorials/ ------------------------------- Relevant Links ▶ https://midjourney.gitbook.io/docs/user-manual ▶ https://midjourney.com/home ▶ https://www.midjourney.com/app/ ▶ https://www.topazlabs.com/gigapixel-ai ------------------------------- This video is created on the Wacom Cintiq Pro 32 with MacBook Pro, Atem Mini Pro, Midjourney, DALL-E, Premiere Pro, After Effects, Gigapixel and Photoshop. 00:00:00 The new Midjourney version FOUR is here! So in today's episode I've got a bombshell of awesome content for you. Not only did Midjourney just release their new version - version four. We'll get you set up - so you can start using it. 00:00:21 But I also promised you guys that I'd show you how I fix messy backgrounds, extend them and upscale them. 00:00:31 Let me just note that I'm NOT a Midjourney ambassador. 00:00:40 In this episode I'll also explain a little about copyrights and how I use Midjourney for storyboards and comics. 00:01:15 So the new version of Midjourney is here. Version 4. And you start using it by writing /settings and pressing return - in Discord. This will bring up the small menu where you can switch on version four of MJ. 00:01:40 But is version 4 so much better than the old version? And how does it fare with comic book style, for example? 00:02:09 But when it comes to doing comic book style?
00:02:36 A couple of you wonderful people have been asking questions about cropping, formats, dimensions, how to get the full view of a face - and stuff like that. And also whether there is an easy way to turn some of your designs into 3D models. And yes, a couple of you have asked about NijiJourney... And we'll look deeper into ALL of that - in the next episode. 00:03:20 Right now we're gonna look at backgrounds and how I use Midjourney for creating backgrounds for animation videos and comic books. 00:03:47 Mostly when I create comics, I actually work in both Adobe Illustrator and Photoshop. But when I have to draw backgrounds, so far I've often been using Procreate on the iPad. 00:04:00 But lately I've really powered up my Midjourney skills and I've started to create backgrounds with the help of Midjourney. 00:04:10 So my process now for creating backgrounds is to start with a good prompt. Do a couple of tweaks with the remix settings switched on. And then I fix my background with the help of DALL-E and Photoshop. 00:04:49 Of course it's totally doable with Photoshop. But it takes a little time sometimes. But fear not. There is a really neat solution for that. You simply use one more AI gadget - like the Gigapixel app. 00:05:32 If you use Gigapixel, you simply go into the app. 00:05:39 And then you choose what kind of image you're upscaling. Now my image is of course a drawing. So I'm using the preset that is best for illustrations. 00:05:55 You can even create some wide shots with the help of DALL-E. 00:06:10 Now of course a skilled artist like me and yourself will notice that this workflow can throw off the perspective of the illustration. But I'm going to use this for sort of a pan - for a digital and animated comic book. 00:06:27 Otherwise the solution of course would be to press CMD + T in Photoshop and simply adjust the perspective of the building in the image.
00:06:36 By the way, if you want to learn more about prompts for comic book characters and character design, 00:06:48 Now what about copyrights? Before you get your knickers in a twist - I will point out that you should of course NEVER under any circumstances steal from other artists. That will never make you shine. 00:07:29 As an artist myself, I can learn to imitate other artists - or get inspired by other artists. But I would never claim that my art was created by that artist. 00:08:10 Having said that, the copyright rules at the moment are such that you cannot claim copyright on artwork done entirely by AI. There needs to be human skill involved. And Midjourney still has terms of use that state that if you want to sell the artwork you create, you have to be on the enterprise subscription. 00:08:39 Now let's check out how to create your own server on Discord and get Midjourney to work on it. 00:09:38 Now for keeping your prompts and artwork private. 00:10:14 I hope you enjoyed this video. Please leave a comment below telling me how that worked out for you - and if you have any requests for upcoming episodes.
published 2022-11-13

Midjourney - comic characters + the SECRET of fixing hands and eyes!
In this video you will learn the SECRET RECIPE for fixing hands and eyes in Midjourney AI - an OpenAI DALL-E tutorial. This is about how to do character design with the help of Midjourney, DALL-E and Photoshop. This is a tutorial to learn the best Midjourney img2img and Midjourney prompting. In upcoming episodes I describe how to achieve the same in Stable Diffusion (Automatic1111 and Invoke). SUBSCRIBE ▶ handle: https://www.youtube.com/@levendestreg ▶ You can subscribe to our channel here: https://www.youtube.com/user/levendestreg?sub_confirmation=1 ▶ Read more: https://levendestreg.dk/en ------------------------------- Links for Midjourney ▶ Documentation: https://midjourney.gitbook.io/docs/user-manual ▶ Midjourney's website: https://midjourney.com/home ▶ Once logged in, click this link to see your images: https://www.midjourney.com/app/ ------------------------------- This video is created on the Wacom Cintiq Pro 32 with MacBook Pro, Atem Mini Pro, Midjourney, DALL-E, Premiere Pro, After Effects and Photoshop. 00:00:00 Are you struggling with hands, fingers and eyes in Midjourney? Maybe your characters aren't looking as high quality and nice as you would like them to. 00:00:08 In this video I will show you how I use Midjourney for character design for comic books and animation videos to help create a more consistent look across frames and scenes. 00:00:19 And at the end of this video I will also show you how to FIX messy hands and eyes in a quick and easy way. 00:00:35 One of the hardest things about creating comic book characters (or animation characters) is to create and draw characters that look consistent. 00:00:55 So this video is about character design with Midjourney, DALL-E and Photoshop. 00:01:27 Now I think by now most of you know I love to work with Midjourney. I use it every day in my workflow. And the more I learn - the more thrilled I am about this tool in development.
00:02:02 Also, if you're wondering how it is that my Discord app looks so clean with only my prompts in it, it's because I've set up my own server on Discord - where I can use Midjourney. And in my next video, I will not only share with you how to do that. I will also show you how I use Midjourney for creating storyboards. And how I fix messy backgrounds and extend them with the help of artificial intelligence. 00:02:36 Using Midjourney to create comic book characters or animation characters can seem daunting at first. 00:03:00 If you haven't already, you should check out this video I did on writing the best prompts for Midjourney - and then come back and continue this video. 00:03:15 First thing I need you to do is to write /settings. This brings up a menu in Midjourney - and you have to toggle on the "remix" button. 00:03:45 So, when we create comic book characters, I will start with telling Midjourney that I want it to create a comic book style image. 00:04:08 And actually I've had really good results with the word lookalike. 00:04:14 You might think that using the feature where you insert a linked image (img2img) to create the character from is the best way to go about character design when using Midjourney. 00:04:34 Let me just show you really quickly how it works. 00:04:42 First you take an image and save it onto your laptop. 00:04:45 Then you pull the image into Discord and press return on your keyboard. 00:04:50 Then open up the image in a new browser tab, copy the URL and paste it into your prompt. 00:04:56 As I said - I much prefer using the phrase "lookalike" instead of using a specific image for prompts. 00:05:24 And use the same celebrity for your prompts on that character. 00:05:45 Let's start with --stop! 00:06:04 Then there is the --test expression. 00:06:14 And --testp is of course for more photorealistic renderings or 3D animation characters. 00:06:20 So, Unreal Engine. You might also want to try out that term if you're creating 3D characters.
00:06:37 Aspect ratio is one of the terms that you also need to be familiar with. 00:08:18 But what about copyrights in all of this? Well, I'll dive deeper into that in the next episode... where I'll also explain how to set up your own server on Discord - and keep your designs to yourself. 00:08:46 Now, I promised you that I would show you a quick fix for hands and eyes. 00:09:02 The solution for fixing hands and eyes is actually going into another AI engine, namely DALL-E. 00:09:26 So save your Midjourney image onto your computer. Pull it into DALL-E. 00:09:32 And simply edit the photo by erasing the parts you want the engine to fix. 00:09:37 Then you write into the text bar what it is you want the engine to create for you. 00:09:42 You get 50 free credits your first month with DALL-E. 00:09:45 15 free credits will replenish every month after that, on the same day of the month. 00:09:50 You use one credit to edit an image. And it will often take 3-4 takes for it to create something that you can use.
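A quick back-of-envelope on the credit numbers mentioned above - 15 credits replenishing monthly, one credit per edit attempt, and typically 3-4 takes per usable result. The worst case of 4 takes is assumed here, and DALL-E's pricing may of course change:

```python
# DALL-E free-credit math from the numbers in the video (illustrative only)
credits_per_month = 15       # free credits that replenish monthly
credits_per_edit = 1         # one credit per edit attempt
takes_per_usable_fix = 4     # it often takes 3-4 takes; assume the worst case

edits_per_month = credits_per_month // credits_per_edit
usable_fixes = edits_per_month // takes_per_usable_fix
print(usable_fixes)  # roughly 3 usable hand/eye fixes per month on free credits
```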
published 2022-11-06

Midjourney - This will change how you write prompts! (SECRET RECIPE)
In this video you will learn the SECRET RECIPE for writing the best prompts for Midjourney AI and Stable Diffusion - to create stunning and professional-looking AI-created artwork. This is a tutorial to learn the best Midjourney prompts and Stable Diffusion prompts. SUBSCRIBE ▶ handle: https://www.youtube.com/@levendestreg ▶ You can subscribe to our channel here: https://www.youtube.com/user/levendestreg?sub_confirmation=1 ▶ Read more: https://levendestreg.dk/en ------------------------------- Links for Midjourney ▶ https://docs.midjourney.com/ ▶ https://github.com/willwulfken/MidJourney-Styles-and-Keywords-Reference/blob/main/README.md ▶ Midjourney's website: https://midjourney.com/home ▶ Once logged in, click this link to see your images: https://www.midjourney.com/app/ ------------------------------- This video is created on the Wacom Cintiq Pro 32 with MacBook Pro, Atem Mini Pro, Midjourney, DALL-E, Premiere Pro, After Effects and Photoshop. 00:00:00 You will learn the secret recipe for writing the best prompts for Midjourney - and thereby get way better results with Midjourney. 00:00:08 Are you writing prompts and just hoping for the best? 00:00:19 In this video you will learn the secret recipe for writing the best prompts for Midjourney - and thereby get way better results with Midjourney. 00:00:27 And at the end of this video I'll also show you how to get a cleaner view of your Midjourney Discord. 00:00:38 Before we get started, let me note that if you need to learn more about how to get started with Midjourney and setting it up, pause this video and go watch this video: https://youtu.be/fNZuyECP3AE 00:00:50 Also, if you want to learn more about character design in Midjourney and how to create comic book characters or animation characters - well, that's our upcoming episode, where we also talk about how to fix the hands and eyes of your creations. 00:01:21 First we need to look at how to build a good prompt.
00:01:31 When you want Midjourney to create images for you, you start with the imagine prompt. 00:02:25 Now a little sidetrack: a lot of creatives are terrified of this new type of intelligence, claiming that it will steal our jobs. Because who will need creatives if you can just prompt a computer? 00:02:53 So, I don't think professional creatives need to fear this type of tool. 00:03:06 There are 4 ingredients you need in your prompts for them to work to your advantage. First off you need the "style of". 00:03:14 Then you need the "main idea". 00:03:17 Then the artsy lingo and the details. 00:03:19 Let's start with number two, because it's the most obvious one. But this is also where most people are getting it wrong. 00:03:42 Now you might think that it's about making longer prompts, then. 00:03:56 But let's start with telling Midjourney that I would like to create something that looks more like a comic book. 00:04:40 But let's just get another take on this... where I use an image as the image source to create the image from. 00:05:06 Now for the remix settings. Remix is a fairly new feature in Midjourney, but it is essential to your work as it enables you to give feedback to Midjourney and tweak your prompts. 00:05:39 So, I told you that Midjourney is weighing your words. And the first words in your prompt are the most important. 00:05:48 1. Therefore you first need to explain WHAT type of image you want. Is it comic book style, is it animation style, is it flat design and so on. 00:05:55 2. Next you want to convey your idea. What is happening in your image? 00:06:00 3. This is where your artsy lingo comes in. Is it a wide shot? Is it a bird's-eye perspective? And so on. 00:06:06 4. And lastly there are the minor details and extra info you want to give Midjourney. That could be mentioning colors, aspect ratio, dimensions, --test (for illustrations) or --testp (for photorealistic). 00:06:20 There are links in the description below.
These will take you to a page on GitHub where you can read more about the different expressions to use in your prompts. 00:06:28 Also, I want to add that I have seen very good results with the term "unreal engine" when I use it in my prompts. It seems to create something that looks a little 3D-ish. 00:06:43 Also settings. For now, try typing /settings and then click return. This will bring you to a small menu with buttons you can toggle on and off. 00:06:51 Then there is stylize. You probably want to try out this expression too, because it will prevent the bot from being too creative. 00:07:19 --stop is a great expression you can use. 00:07:39 Then there is --uplight. This uses a light upscaler when you upscale images. 00:07:52 Then there is the question of using Discord on your own page without all the noise from other people.
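The four-ingredient recipe above (style first, then main idea, then artsy lingo, then minor details - in that order, because the first words weigh most) can be captured in a tiny helper. The function name and the sample prompt are made up for illustration; you still paste the resulting line into Discord yourself:

```python
def build_prompt(style, main_idea, artsy_lingo="", details=""):
    """Assemble a Midjourney prompt from the four ingredients, in the
    order that matters: style, main idea, artsy lingo, details."""
    parts = [style, main_idea, artsy_lingo, details]
    return "/imagine " + ", ".join(p for p in parts if p)

prompt = build_prompt(
    "comic book style",                                 # 1. what type of image
    "a detective running across a rooftop at night",    # 2. the main idea
    "wide shot, bird's-eye perspective",                # 3. artsy lingo
    "muted colors --ar 16:9",                           # 4. minor details
)
```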
published 2022-10-30

"Stable Diffusion: Which is Better - Invoke or A1111? We Reveal the Unexpected Answer!"
How to get the Infinity Canvas and Stable Diffusion installed locally on your MacBook Pro or PC. This Stable Diffusion tutorial is the best way to learn about installing Invoke on your computer. Learn about inpainting and outpainting with the infinity canvas and Invoke. Invoke is a strong contender against Automatic1111 (A1111) as a web UI. Learn how to switch between checkpoint models, do img2img prompting, inpainting, outpainting, upscaling and so much more. Like Olivio Sarikas - but with focus on the artistic side of the process. This video is created on the Wacom Cintiq Pro 32 with Stable Diffusion, DALL-E, Photoshop, Google Colab and MacBook Pro, Atem Mini Pro, After Effects and Adobe Premiere. ------------------------------- SUBSCRIBE ▶ handle: https://www.youtube.com/@levendestreg ▶ You can subscribe to our channel here: https://www.youtube.com/user/levendestreg?sub_confirmation=1 ▶ Read more: https://levendestreg.dk/en ------------------------------- ▶ Download Python: https://www.python.org/downloads/ ▶ Download Invoke: https://github.com/invoke-ai/InvokeAI/releases ▶ Invoke guide to install: https://invoke-ai.github.io/InvokeAI/installation/010_INSTALL_AUTOMATED/#walk-through ▶ Xcode install command: xcode-select --install ▶ Mac install Xcode: https://mac.install.guide/commandlinetools/index.html ▶ Mac install Xcode guide: https://mac.install.guide/commandlinetools/4.html ▶ Install command for Invoke: /Applications/InvokeAI-installer/install.sh ▶ GitHub issues: https://github.com/invoke-ai/InvokeAI/issues/1195 ▶ HuggingFace tokens: https://huggingface.co/settings/tokens ▶ Infinity canvas: https://huggingface.co/spaces/lnyan/stablediffusion-infinity 00:00:00 Stable Diffusion Infinity Canvas and Invoke! 00:00:22 What is Invoke? Well, it's an alternative to Automatic1111. And a really good alternative. 00:01:23 All the descriptions of installing Invoke are for PC.
But luckily for you all - and for the Invoke team - I've figured out the process on a MacBook - and I'm sharing all that learning with you. 00:01:41 Just a short introduction to Invoke. Invoke is a way of installing Stable Diffusion in an easy manner and it has a really nice interface - or you can prompt directly from the Terminal app. 00:03:16 Now for those of you who are new to this channel my name is Maria Prohazka. I'm the founder and creative director at the small agency Levende Streg. 00:04:05 This install takes about 30 minutes. Then you should be up and running. And it's free. 00:04:57 So first off, you need to go to: https://github.com/invoke-ai/InvokeAI/releases. 00:05:42 For installation of python3, go to https://www.python.org/downloads/. Click the download button. Then double-click the dmg and install it. 00:05:56 How to open Terminal on Mac. If you don't know how to launch the Terminal, you just go into your Applications folder, find the Utilities folder, and in there you have the Terminal app. Just double-click on that. 00:06:06 Then you write: python3 and hit return. And you can see Python is now working. 00:06:11 Afterwards you need to go into the Invoke installation folder, find the install.sh and open it with a text editor – I use BBEdit. 00:06:29 Now we need to install the Xcode Command Line Tools – so open up the Terminal and write: "xcode-select --install" and then hit return. 00:06:57 So if you need to close down your Terminal window to start afresh – you can just write exit and hit return. And then hit cmd + q. 00:07:17 So now we need to run the install file from inside the Terminal app. You do this by typing /Applications/InvokeAI-installer/install.sh - remember to spell it right. And then hit return. 00:07:43 Next you'll be prompted to write the preferred location for the new Invoke folder the install will create. And here you write /Applications and then hit return. 00:08:13 Creating the output folder. 
So here I write /Applications/invokeai/outputs – and I accept that location. 00:08:22 Then allow the NSFW checker. And now you need to download different models. I chose to download the required models by pressing "r" and hitting return. 00:08:33 Then you need to accept the HuggingFace license. And then you will need to go and create a HuggingFace token. 00:09:03 Terminal will need to run in the background when you use Invoke. 00:09:13 Now when you paste the localhost address into your browser it will bring you to the web UI. 00:09:25 Now for installing and loading models. There is a button for that. 00:11:22 About prompting in Invoke. In Invoke there is a little description in the prompt box. 00:11:52 Inpainting and outpainting in Invoke. A bit like combining Photoshop and DALL-E – and then you can create from that via prompting. And that works really well, I think. 00:13:01 Also there is a great solution for trying out the infinity canvas on huggingface: https://huggingface.co/spaces/lnyan/stablediffusion-infinity
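The Mac install steps above boil down to a short Terminal sequence. Here is a sketch of it (the paths are the ones quoted in the description; adjust them to where you unpacked the installer):

```shell
# 1. Verify Python 3 is installed (the installer needs it):
python3 --version

# 2. Install the Xcode Command Line Tools - note the two plain
#    hyphens; a long dash pasted from a web page will fail:
#    xcode-select --install

# 3. Run the Invoke installer from inside the Terminal app:
#    /Applications/InvokeAI-installer/install.sh

# 4. When prompted, answer /Applications for the install location
#    and /Applications/invokeai/outputs for the output folder.
```

Steps 2-4 are left as comments here because they download gigabytes and prompt interactively; run them one at a time.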
published 2023-01-06

Stable Diffusion - this changes everything! (Automatic 1111)
Stable Diffusion - here is the tutorial on the FULL Automatic 1111 dashboard and web UI of Stable Diffusion. Learn about the checkpoint merger and checkpoint models, switch between checkpoint models, do img2img prompting, in-painting, upscaling and so much more. The best Stable Diffusion tutorial to get to know all the buttons and sliders and learn how to work like a PRO in Stable Diffusion! It's made on the Wacom Cintiq Pro 32 with Stable Diffusion, DALL-E, Photoshop, Google Colab and Macbook Pro, Atem Mini Pro, After Effects and Adobe Premiere. ------------------------------- SUBSCRIBE ▶ handle: https://www.youtube.com/@levendestreg ▶ You can subscribe to our channel here: https://www.youtube.com/user/levendestreg?sub_confirmation=1 ▶ Read more: https://levendestreg.dk/en ------------------------------- ▶ Google Colab - Stable Diffusion: https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb ▶ Prompt templates: https://github.com/Dalabad/stable-diffusion-prompt-templates ▶ Gif: https://tenor.com/da/view/wonder-woman-gif-20319406 ------------------------------- 00:00:00 Welcome back - I hope you have had a wonderful Christmas. 00:00:08 Today I’m going to look at the dashboard in Stable Diffusion Automatic 1111. I’ll show you how to use the checkpoint merger, switch between checkpoint models, do img2img prompting, in-painting, upscaling and so much more - all in Stable Diffusion on Google Colab. 00:01:00 I’m running the 1.5 version of Stable Diffusion on Google Colab. 00:01:07 Use Runpod for Stable Diffusion 2.1. 00:01:38 How to run and set up Google Colab. And remember to fill in the path to the folder with your models. 00:03:07 Let’s start by going into the settings. Now there are lots of settings - and here you can also choose the path. 00:03:52 Txt2img tab in Stable Diffusion Automatic 1111. 
00:04:03 My name is Maria Prohazka - founder and creative director at Levende Streg agency. 00:04:50 The prompt box in the txt2img tab. 00:05:03 How to use brackets and square brackets. 00:06:15 Samplers in Stable Diffusion Automatic 1111. 00:07:00 How to do upscaling in the Extras tab. 00:07:58 Txt2img tab: restore faces and tiling. 00:10:00 CFG scale - how to work the scale. 00:10:21 Seed and seed number. 00:10:52 Extras tab and upscaling. 00:12:48 Img2img tab. 00:14:21 Inpainting. 00:16:49 What is Invoke? 00:17:25 Checkpoint Merger in Automatic 1111. 00:19:00 Save a prompt as a style.
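The brackets and square brackets mentioned at 00:05:03 are Automatic 1111's attention syntax. Roughly, per the web UI's own documentation (the example phrases here are made up):

```
(detailed eyes)       weight x1.1 - more attention on this phrase
((detailed eyes))     brackets stack: x1.1 x 1.1 = about x1.21
[busy background]     weight /1.1 - less attention on this phrase
(detailed eyes:1.4)   explicit weight, here x1.4
```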
published 2022-12-26

Stable Diffusion - this is a gamechanger! How to train AI (Google Colab, Automatic 1111!)
▶In this video you will learn the SECRET RECIPE to setting up Stable Diffusion to run on any device. A Stable Diffusion with Google Colab tutorial. And how to do comic book character design with the help of Stable Diffusion, DALL-E and Photoshop. This is a Stable Diffusion tutorial to learn the best setup and get started with AI generated art. It's made on the Wacom Cintiq Pro 32 with Stable Diffusion, DALL-E, Photoshop, Google Colab and Macbook Pro, Atem Mini Pro, After Effects and Adobe Premiere. ------------------------------- SUBSCRIBE ▶ handle: https://www.youtube.com/@levendestreg ▶ You can subscribe to our channel here: https://www.youtube.com/user/levendestreg?sub_confirmation=1 ▶ Read more: https://levendestreg.dk/en ▶ Download cheat sheet: https://levendestreg.dk/en/tutorials/ ------------------------------- ▶ Astria: https://www.strmr.com/ ▶ Google Colab - Stable Diffusion: https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb ▶ https://www.mage.space/ ▶ Templates: https://github.com/Dalabad/stable-diffusion-prompt-templates ▶ Gif: https://tenor.com/da/view/wonder-woman-gif-20319406 00:00:00 How to set up Google Colab, Automatic 1111 and Stable Diffusion 00:04:37 So before we get down and dirty with Google Colab - just head on over to the Astria website. The link is in the description below - and here I want to train my own model. 00:05:41 So, how would I go about creating a comic book character? Ideally I would create about 20 comic book illustrations of the character that I want to train Stable Diffusion on. 00:06:26 So in Astria I just upload 20 pictures of me. They have to have the aspect ratio 1:1. And they have to be from different angles and perspectives. You need shots that create a 360-degree view of your character. 00:06:59 But back to Astria and Stable Diffusion. 00:07:20 First thing you want to do, is upload your new model file to your Google Drive. 
00:07:27 When that's done, head on over to Google Colab - the link is in the description. And this will take you to a browser page with different bits of code that we need to execute. It's really super simple and safe. 00:07:50 So first step is to connect your Google Drive. 00:08:00 Next step is to run Automatic 1111. So click the playhead. And you will see the green checkmark. 00:08:07 Third step is to run the requirements code. 00:08:10 And then we come to the model. Here you can choose between Stable Diffusion 1.5 and the two 2.1 variants. 00:08:18 And under there you can also insert the full path of your trained model. 00:08:24 So I open up the folder structure on the lefthand side. Here I click into MyDrive - and I find my .ckpt file. Click on the three dots and copy the path to that. 00:08:35 And then insert that path into the code. Now click the playhead, and it’s gonna give you a green checkmark. 00:08:40 And lastly you click on the fifth playhead. 00:08:46 When it's done - you're NOT gonna get a green checkmark for this because this has to keep running as you use Stable Diffusion. 00:08:55 So now you can simply click on the link down here below. And that's gonna open up that user interface that you've probably seen many times before in other tutorials. 00:09:22 And now in Stable Diffusion you’ll notice that my model has been loaded. 00:09:36 Now for a way simpler method of getting comfortable with Stable Diffusion - if you don't want to use Google Colab. Head on over to https://www.mage.space/ - the link is in the description. You cannot load in a different model - but you can get a feel for Stable Diffusion. 00:10:04 And now moving on to NijiJourney. 00:10:21 But what you can do - if you want to be able to find your work among all the other artists' works - is to DM the bot. What Maria? How do I do that? 00:10:36 Easy breezy. You simply go into the NijiJourney server. There you prompt. 
Then follow closely and when your image has been created, you click on it and add the envelope. This will create a DM (direct message) to the bot. Then you go up to the Discord icon in the corner. And then you just click on the bot. And in here, you can just start creating your new prompts.
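The "copy the path and insert it into the code" step above can be sanity-checked from a shell. A minimal sketch, assuming a hypothetical checkpoint name (the real path comes from the three-dots menu in Colab's file browser):

```shell
# Hypothetical path to a trained model on a mounted Google Drive:
CKPT="/content/gdrive/MyDrive/my-character.ckpt"

# The Colab cell expects an existing .ckpt file at this path before it
# loads the model into Automatic 1111:
if [ -f "$CKPT" ]; then
  echo "checkpoint found: $CKPT"
else
  echo "checkpoint missing - re-copy the path from MyDrive"
fi
```

If the path was copied with a typo, the notebook fails at the model-loading step, so checking the file exists first saves a Colab restart.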
published 2022-12-17

Midjourney - comic characters + the SECRET of fixing hands and eyes!
In this Midjourney AI and OpenAI DALL-E tutorial you will learn the SECRET RECIPE to fixing hands and eyes. This is about how to do character design with the help of Midjourney, DALL-E and Photoshop. This is a tutorial to learn the best Midjourney img2img and Midjourney prompting. In upcoming episodes I describe the way to achieve the same in Stable Diffusion (Automatic 1111 and Invoke). SUBSCRIBE ▶ handle: https://www.youtube.com/@levendestreg ▶ You can subscribe to our channel here: https://www.youtube.com/user/levendestreg?sub_confirmation=1 ▶ Read more: https://levendestreg.dk/en ------------------------------- Links for Midjourney ▶ Documentation: https://midjourney.gitbook.io/docs/user-manual ▶ Midjourney's website: https://midjourney.com/home ▶ Once logged in click this link to see your images: https://www.midjourney.com/app/ ------------------------------- This video is created on the Wacom Cintiq Pro 32 with Macbook Pro, Atem Mini Pro, Midjourney, DALL-E, Premiere Pro, After Effects and Photoshop. 00:00:00 Are you struggling with hands, fingers and eyes in Midjourney? Maybe your characters aren’t looking as high quality and nice as you would like them to. 00:00:08 In this video I will show you how I use Midjourney for character design for comic books and animation videos to help create a more consistent look across frames and scenes. 00:00:19 And at the end of this video I will also show you how to FIX messy hands and eyes in a quick and easy way. 00:00:35 One of the hardest things about creating comic book characters (or animation characters) is to create and draw characters that look consistent. 00:00:55 So this video is about character design with Midjourney, DALL-E and Photoshop. 00:01:27 Now I think by now that most of you know, I love to work with Midjourney. I use it every day in my workflow. And the more I learn - the more thrilled I am about this tool in development. 
00:02:02 Also, if you’re wondering how it is that my Discord app looks so clean with only my prompts in it, it’s because I’ve set up my own server on Discord - where I can use Midjourney. And in my next video, I will not only share with you how to do that. I will also show you how I use Midjourney for creating storyboards. And how I fix messy backgrounds and extend them with the help of artificial intelligence. 00:02:36 Using Midjourney to create comic book characters or animation characters can seem daunting at first. 00:03:00 If you haven’t already, you should check out this video I did on writing the best prompts for Midjourney - and then come back and continue this video. 00:03:15 First thing I need you to do, is to write /settings. This brings up a menu in Midjourney - and you have to toggle on the “remix” button. 00:03:45 So, when we create comic book characters, I will start with telling Midjourney that I want it to create a comic book style image. 00:04:08 And actually I’ve had really good results with the word lookalike. 00:04:14 You might think that using the feature where you insert a linked image (img2img) to create the character from is the best way to go about character design when using Midjourney. 00:04:34 Let me just show you really quickly how it works. 00:04:42 First you take an image and save it onto your laptop. 00:04:45 Then you pull the image into Discord and press return on your keyboard. 00:04:50 Then open up the image in a new browser tab, copy the url and paste it into your prompt. 00:04:56 As I said - I much prefer using the phrase “lookalike” instead of using a specific image for prompts. 00:05:24 And use the same celebrity for your prompts on that character. 00:05:45 Let’s start with --stop! 00:06:04 Then there is the --test expression. 00:06:14 And --testp is of course for more photorealistic renderings or 3D animation characters. 00:06:20 So Unreal Engine. You might also want to try out that term, if you’re creating 3D characters. 
00:06:37 Aspect ratio is one of the terms that you also need to be familiar with. 00:08:18 But what about copyright in all of this? Well, I’ll dive deeper into that in the next episode...where I’ll also explain how to set up your own server on Discord - and keep your designs to yourself. 00:08:46 Now I promised you that I would show you a quick fix for hands and eyes. 00:09:02 The solution to how to fix hands and eyes is actually by going into another AI engine, namely DALL-E. 00:09:26 So save your Midjourney image onto your computer. Pull it into DALL-E. 00:09:32 And simply edit the photo by erasing the parts you want the engine to fix. 00:09:37 Then you write into the text bar what it is you want the engine to create for you. 00:09:42 You get 50 free credits your first month with DALL-E. 00:09:45 15 free credits will replenish every month after that, on the same day of the month. 00:09:50 You use one credit to edit an image. And it will often take 3-4 takes for it to create something that you can use.
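The Midjourney parameters covered above combine at the end of a single prompt line. A sketch (the subject text is made up; the flags are Midjourney's documented parameters):

```
/imagine prompt: comic book style heroine, lookalike of your chosen celebrity --ar 2:3 --stop 80
```

Here --ar sets the aspect ratio, --stop halts generation early at a percentage of the render (here 80), and --test / --testp switch to the test and photorealistic test models mentioned in the video.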
published 2022-11-06