Well, Fedora 40 here as well and it just doesn’t work on my computer. Sure, Nvidia, blah blah blah. X does work flawlessly on my machine, though.
Like trying to destroy people’s lives so they can make a few dollars.
I did this with a suitcase lock once, luckily only 3 digits. The code was 587. I remembered the code at around 540.
Put a page on your website saying that scraping it costs [insert amount], and block the bots otherwise.
deleted by creator
This is the prompt:
A {cyberpunk|dieselpunk|steampunk|solarpunk} wallpaper, black and {violet|crimson|blue|dark green|gray} colors, evening {landscape|cityscape|towers} Surreal Cubist Expressionism A distorted, fragmented figure in a red and orange hue, its features melting and shifting like wax, swirls in a vortex of swirling purple and green patterns. The background is a deep, blood-red, with stark white accents and splashes of primary colors. In the foreground, jagged shards of glass and shattered metal protrude from the canvas, as
The first part was written by hand; the part starting with "Surreal Cubist Expressionism" was created by Llama3. The model is Stable Diffusion XL, more specifically AlbedoBase XL.
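For anyone curious how the `{option|option|…}` parts of the prompt work: each group is replaced by one randomly chosen option before the prompt reaches the model. A minimal sketch of that expansion (my own illustration, not the exact code any particular frontend uses):

```python
import random
import re

def expand_wildcards(prompt: str, rng: random.Random) -> str:
    """Replace each {a|b|c} group with one randomly chosen option."""
    pattern = re.compile(r"\{([^{}]*)\}")
    while True:
        match = pattern.search(prompt)
        if match is None:
            return prompt
        choice = rng.choice(match.group(1).split("|"))
        prompt = prompt[:match.start()] + choice + prompt[match.end():]

# Example: each run picks one style and one color at random.
rng = random.Random()
print(expand_wildcards("A {cyberpunk|steampunk} wallpaper, {violet|blue} colors", rng))
```

So a single prompt template like the one above yields a different concrete prompt on every generation.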
Nah, php over python any day. Equally easy to start, equally fucked up core, but the ecosystem around it is so much saner and easier. And I’d argue it’s even easier for beginners.
Unless you need something that only has python bindings, I’d never choose python.
Well, sometimes you don’t want to do that. But yeah, overall you’re right.
ChatGPT uses Dall-E, which wouldn’t be my first choice. Its only advantage over Stable Diffusion is that you can use natural human language. But learning to prompt Stable Diffusion is not that hard.
Edit: And Flux beats both Dall-E and Stable Diffusion. And you can also use natural language with that, if I recall correctly.
I’ve seen some “normal”-looking people made by AI, though they’re definitely a minority of the images.
One of my favourites who isn’t very well known is Meg Myers. She puts raw emotion into her music.
Well, yes. Not sure what I missed?
Same! I never understood the criticism. Yeah, all of his dialogue is awkward. But it perfectly fits his character.
When you order a chosen one and a Chewbacca from Wish.
Does anyone consider the Jedi the good guys? I’ve found them awful since I was a kid.
I was seriously waiting for “So uncivilized”.
Sorry for the late reply! I couldn’t check when I noticed the comment and then I forgot. The model is AlbedoBase XL (SDXL).
Here are the full generation parameters:
Prompt: artistic, abstract, lines, light colors
Negative prompt: ugly, deformed, dark
Model: AlbedoBase XL (SDXL)
Sampler: k_dpmpp_sde
Karras: Yes
CFG scale: 7
Size: 1024x512 px
Steps: 30
CLIP skip: 1
If you don’t want to start with local software, I can recommend AI Horde (for example through Horde NG) which is a service where you get free access to a cluster of volunteer Stable Diffusion (and Stable Cascade) workers.
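As a rough illustration, a request to AI Horde carrying the generation parameters above could be assembled like this. This is only a sketch: the field names follow my recollection of the public AI Horde REST API (including the `###` separator it uses between the prompt and the negative prompt), so check them against the official API docs before relying on them.

```python
import json

# Hypothetical AI Horde generation payload built from the parameters above.
# Field names are assumptions based on the public AI Horde API; verify them
# against the official documentation.
payload = {
    # AI Horde joins the prompt and negative prompt with "###".
    "prompt": "artistic, abstract, lines, light colors ### ugly, deformed, dark",
    "params": {
        "sampler_name": "k_dpmpp_sde",
        "karras": True,
        "cfg_scale": 7,
        "width": 1024,
        "height": 512,
        "steps": 30,
        "clip_skip": 1,
    },
    "models": ["AlbedoBase XL (SDXL)"],
}

print(json.dumps(payload, indent=2))
```

A frontend like the one mentioned above essentially builds a payload like this for you, submits it to the horde, and polls until a volunteer worker returns the image.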
You can use those models commercially, and all of them can be used locally. The models are owned by those who created them. If you’re asking about a model you modify and train yourself, it really depends on the original license; IIRC for Stable Diffusion you own the resulting model, but you also have to license it under the same conditions as the original.
Who owns the generated image is much less clear and really depends on the country. For example in my jurisdiction such an image is not a copyrightable work, meaning no one owns it.
Anyway, using AI Horde is a good start because it does not require that you learn how to use the models locally but at the same time it’s not a watered down service like most AI services are. And it’s completely free.
Not sure what you mean? That I posted it to 3 separate communities?
I was talking about Nintendo; they constantly sue people (and other companies) for obscene amounts of money just because they’re rich and can afford it.