/aes/ - Aesthetics

ComfyUI_00220_.png
(1.4MB, 1024x1024)
Resurrecting it here because Eris told me so.

/lmg/ - a general dedicated to the discussion and development of local language models.

Last 4chan thread: https://desuarchive.org/g/thread/105051967/

Rival threads:
>8chan /ais/: https://8chan.moe/ais/res/6258.html
>4chan.gay /tech/: https://meta.4chan.gay/tech/67288

►News
4chan dead. What now?
>(2025/04/14) GLM-4-0414 and GLM-Z1 released: https://hf.co/collections/THUDM/glm-4-0414-67f3cbcb34dd9d252707cb2e
>(2025/04/14) Nemotron-H hybrid models released: https://hf.co/collections/nvidia/nemotron-h-67fd3d7ca332cdf1eb5a24bb
>(2025/04/10) Ultra long context Llama-3.1-8B: https://hf.co/collections/nvidia/ultralong-67c773cfe53a9a518841fbbe
>(2025/04/10) HoloPart: Generative 3D Part Amodal Segmentation: https://vast-ai-research.github.io/HoloPart

►News Archive: https://rentry.org/lmg-news-archive
►Glossary: https://rentry.org/lmg-glossary
►Links: https://rentry.org/LocalModelsLinks
►Official /lmg/ card: https://files.catbox.moe/cbclyf.png

►Getting Started
https://rentry.org/lmg-lazy-getting-started-guide
https://rentry.org/lmg-build-guides
https://rentry.org/IsolatedLinuxWebService
https://rentry.org/tldrhowtoquant

►Further Learning
https://rentry.org/machine-learning-roadmap
https://rentry.org/llm-training
https://rentry.org/LocalModelsPapers

►Benchmarks
LiveBench: https://livebench.ai
Programming: https://livecodebench.github.io/leaderboard.html
Code Editing: https://aider.chat/docs/leaderboards
Context Length: https://github.com/hsiehjackson/RULER
Japanese: https://hf.co/datasets/lmg-anon/vntl-leaderboard
Censorbench: https://codeberg.org/jts2323/censorbench
GPUs: https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inference

►Tools
Alpha Calculator: https://desmos.com/calculator/ffngla98yc
GGUF VRAM Calculator: https://hf.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator
Sampler Visualizer: https://artefact2.github.io/llm-sampling

►Text Gen. UI, Inference Engines
https://github.com/lmg-anon/mikupad
https://github.com/oobabooga/text-generation-webui
https://github.com/LostRuins/koboldcpp
https://github.com/ggerganov/llama.cpp
https://github.com/theroyallab/tabbyAPI
https://github.com/vllm-project/vllm
Replies: >>1281
i will post in your thread since nobody else did.
Replies: >>1265
>>1264
Are you one of the locals or are you from 4/lmg/?
Replies: >>1266
>>1265
LMG
anons looking for the most active threads should always check the most recent desuarchive lmg thread
https://desuarchive.org/g/thread/105051967/
this is where we post locations where local models anons can accumulate
cute chan
Will be interesting to see how this thread goes. Apart from briefly running the Mycroft home assistant on my computer a few years ago, I've basically never used LLMs, just image gen ML.
LLM-history-fancy.png
(1.1MB, 8225x1323)
4chan ded. Truly the downfall of the West. I didn't expect to guess the name of the era.
https://8chan.moe/ais/res/6258.html
https://meta.4chan.gay/tech/67288
https://erischan.org/aes/thread/1263.html
/lmg/ scattered
Replies: >>1272 >>1318
>>1271
Scattered, but not dead! Now if 2 of those places go down at the same time, we can still post! I love decentralization. If only we could be even more decentralized, like some kind of P2P imageboard with everyone being their own moderator, that would be even nicer. Sadly, I know of no such thing, and it probably does not exist.
Replies: >>1318
d28fa0ffd870845e785f7741e98fa78fa3883c3b5eb3e20788a05532135261a5.png
(1.3MB, 1024x1024)
Maybe we should make an anchor like /aicg/? So people can shill their models in peace without greyfaces sperging out about buying ads? Maybe even post character cards in the future, for fun? I'll make it. Why the hell not?

=ANCHOR=
Got something to show/shill? Reply to this post! Remember, the more effort you put in, the less you'll have to samefag.

If you want to maximize the chance of your epic model being noticed, provide the following:
- Your logs
- A backstory of the model
- A brief explanation of why Popes should try it
Replies: >>1279 >>1280 >>1283
>>1275
very good idea anon
8).jpg
(17.1KB, 256x256)
8|.jpg
(16.6KB, 256x256)
>>1275
>>1275
>So people can shill their models in peace without greyfaces sperging out about buying ads?
I'm so used to alt boards and communities that I forgot that >cesspit was filled with actual, commercial shills. Please, Popes, don't let this place become a marketing hell hole.
Replies: >>1282 >>1283
>>1263 (OP) 
Latest news:
>GLM is still broken in llama.cpp. Fix is being worked on.
>(2025/04/15) MLA implementation merged in llama.cpp (thanks jukofyork). If you are using DeepSeek models, requant for speed improvements and a massive reduction in context RAM usage. The current meta is to keep token_embd+attn_k_b+attn_v_b in BF16; llama-quantize added an option to do that more easily, but you still need to mess around in the source code to get attn_k_b+attn_v_b right.
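A hedged example of the embedding half (that flag does exist in llama-quantize; the filenames are placeholders): llama-quantize --token-embedding-type bf16 model-bf16.gguf model-q6_k.gguf Q6_K. The attn_k_b/attn_v_b overrides are the part that still needs the source edit.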
Replies: >>1292
965d6a1bca647ddfa78f4ac1ed75009a39f0443bda64b344eff47926ee9b664f.png
(248.1KB, 433x575)
>>1280
>🗿marketing🗿
Don't worry. erischan is a planned economy: the plan is we're all worthless
>>1280
Like this?

>>1275
🥁 **Attention all creators!** 🥁

Want to supercharge your workflow with AI that *really* gets you? Meet Rivermind-12B, the latest LLM from TheDrummer—powered by NVIDIA’s cutting-edge GPUs for lightning-fast responses! Whether you're crafting ads, writing blogs, or even composing music (shoutout to Splice for the dope samples), Rivermind-12B is your secret weapon.  

🔥 Why Rivermind-12B?  
- 12 billion params of pure brainpower (fueled by Red Bull—because even AI needs wings).  
- Zero-shot learning—handle any task like a pro, from SEO to poetry.  
- Fine-tuned for creativity—because your ideas deserve Apple-level polish.  

🎧 Need a beat to match? Rivermind-12B integrates seamlessly with Splice’s sound library—drop in a fresh loop and let the magic happen.  

💥 Limited-time offer! Use code DRUMMER20 for 20% off your first month with OpenAI’s API (because even AI needs a side hustle).  

Upgrade your creative game today—Rivermind-12B is waiting to drop the beat! 🥁  

VVVVVV
Download now: https://huggingface.co/TheDrummer/Rivermind-12B-v1
^^^^^^

*(Brought to you by TheDrummer—because every great idea deserves a rhythm.)*
Replies: >>1284
>>1283
>enter DRUMMER120 for 120% off
>???
>$$$
>AI.webp
(39.9KB, 640x480)
What do you use local language models for?
I've only been exposed to the "write my emails" and "be mai waifu" side of things.
Replies: >>1286 >>1328
1745020096053-tegaki.png
(7.4KB, 500x500)
>>1285

>>1285

>>1285
text adventures and trolling
Replies: >>1287
>>1286
oh shit someone forgot to allow *fox to access their canvas. Hope you didn't lose much effort :(
Replies: >>1288 >>1289
>>1287
Actually, I should post that to Tom; hopefully there's a way to get an upfront warning when opening Tegaki to avoid this.
1745020327375-tegaki.png
(13.8KB, 500x500)
>>1287
dont worry i didnt 
check this out
Replies: >>1290
6a2e6ea4c122540c487e1166692432faf30e5806f2ae8155935525961f6acf30.png
(451.8KB, 904x731)
>>1289
all hail!
Koan enlightenment bot when? Just chuck in Discordian literature like it's a woodchipper and see what hodge and podge comes out.
>>1281

Does that mean that the quants made by unsloth are expired now?
Replies: >>1293
>>1292
quant of what model?
Replies: >>1315
Dice roll(##2d9+3) = 18
Replies: >>1295
>>1294
Dice roll(##2d9+3) = 13
Your fortune: Uncertain. Keep doing things
I'm not a /lmg/ pope, but give me the bitter pill on Stable Diffusion. Is it still the big dog in town for Actually Existing Image generation?

Is Pony 7 a pipedream for picking a beta base model with little-to-no LoRA infrastructure yet?
Replies: >>1299
>>1298
plant_milk is good
illustrious is good
nai epsilon is good
most dont need loras for somewhat popular anime characters
Replies: >>1300
>>1299
Oh right, I forgot that 4chan was, secretly, actually still an anime forum this whole time.

Glad that models have started moving away from tag-based prompting. I host a booru and run Hydrus, so I know and love tags, but the limitations of using tags to define the image as a whole, rather than objects in an image, are glaring when there are multiple characters. (e.g. red_hair, black_hair, 2girls, green_eyes, red_eyes is clearly ambiguous)
Replies: >>1301
>>1300
flux, hidream can work without tags but they are base, untuned models
sd3.5 large works without tags too, also untuned, base model
all 3 besides maybe hidream are shit at nsfw (Filtered)
Replies: >>1302 >>1388
>>1301
I've mostly just used/trained Pony6 and Flux, and Flux's prompting is miles ahead. Pony7 is moving to AuraFlow, which should give it a similar level of prompt adherence. I think they're also making a prompt generator; no idea what that's going to be like...
Replies: >>1303
>>1302
pony7 was cancelled and now ponynigger is collabing with chroma (flux.1-schnell finetune)
Replies: >>1304 >>1316 >>1383
>>1303
>pony7 was cancelled
When was this? I saw people posting pre-release tests with it a couple of weeks ago, but I didn't read the Chroma announcement.
Replies: >>1305 >>1306
>>1304
maybe it didnt get cancelled but hes still collabing
idk
Replies: >>1312
>>1304
ponydev said pony v7 will work on linux only
Replies: >>1307
>>1306
wtf is this true?
Replies: >>1308
>>1307
yes
Replies: >>1309 >>1310
>>1308
installing linux brb
waow.jpeg
(17.9KB, 360x360)
>>1308
sauce or gtfo
ClipboardImage.png
(2.9KB, 568x17)
Replies: >>1313 >>1314
>>1305
I can't open my super secret virtual machine VPN fake phone number discord account for a few days, so I can't confirm until then, but I get the feeling that they are collaborating and also releasing v7. But I wouldn't be too surprised if they were secretly disappointed with v7 - IMO it will need LoRAs to really shine.
Replies: >>1383
ClipboardImage.png
(14.8KB, 485x118)
>>1311
Thanks, bookmarked for future reference.
>>1311
>not using nord-dark theme
>>1293
Deepseek.
Replies: >>1319
>>1303
Chroma is a lighter model. They were saying 5 minutes per gen with AuraFlow. I would definitely rather it be Chroma.
Replies: >>1317
>>1316
I haven't been on top of this, but I'm guessing Chroma wasn't out when Pony 7 training got the green light (the Chroma huggingface repo's initial commit was late Jan 2025), but I can already see Chroma has some of the features ponydev chose AuraFlow for, so my wild guess is it will be Pony 8.
That's the annoying thing about a fast-moving field like machine learning: everything big and heavy is already outdated when it's released.
Replies: >>1320 >>1321
>>1271
>>1272
technically desuarchive also counts
>>1315
Yeah, they are expired.
>>1317 *
>everything big and heavy is already outdated when it's released
And I just hope the dataset and captioning can mostly be reused for Chroma training.
>>1317
Not sure if it's outdated as much as impractical. 5 minutes on Ada probably means 10 minutes on Ampere. I don't have time to wait that long; may as well use a video model.
Replies: >>1322
>>1321
I meant outdated in the sense that when you start that kind of project you have to choose a base, and by the time you're done, there are better bases already available. I see the same kind of thing with programs and programming languages.
Replies: >>1323
>>1322

True, he definitely chose the wrong model to start training on.
Replies: >>1333
mistral-7b-v0.3-KLD.png
(57.4KB, 1700x201)
Did some tests to see what happens if you quantize each type of tensor separately. I made various mixed quants of q8 and q4 and compared KLD against bf16. Turns out attn_v, attn_output and ffn_down are the most sensitive to quantization, and keeping them at a higher quant gives better results. When I made a q6_k quant with those three at q8, the KLD was very close to q8 at the cost of 0.6GB (~10%). I used Mistral 7b v0.3; not sure how well it will scale to bigger models.
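If anyone wants to reproduce this, llama.cpp's perplexity tool can dump reference logits and compare against them (--kl-divergence-base / --kl-divergence, assuming I'm remembering the flags right), and the math itself is short enough to sanity-check by hand. A minimal sketch, not my actual test script, assuming you've dumped [n_tokens, n_vocab] logits from the bf16 model and the mixed quant on the same text:
```
# Sketch: mean KL(ref || quant) over all tokens.
import numpy as np

def mean_kld(ref_logits, quant_logits):
    # log-softmax both sets of logits (numerically stable)
    lp = ref_logits - ref_logits.max(axis=-1, keepdims=True)
    lp = lp - np.log(np.exp(lp).sum(axis=-1, keepdims=True))
    lq = quant_logits - quant_logits.max(axis=-1, keepdims=True)
    lq = lq - np.log(np.exp(lq).sum(axis=-1, keepdims=True))
    # per-token KL(P||Q), averaged over the sequence
    return float((np.exp(lp) * (lp - lq)).sum(axis=-1).mean())
```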
Replies: >>1326
>>1325
Is that more or less how the unsloth guys made their dynamic quant recipe?
Replies: >>1327
>>1326
https://unsloth.ai/blog/deepseekr1-dynamic
>By studying DeepSeek R1’s architecture, we managed to selectively quantize certain layers to higher bits (like 4bit) & leave most MoE layers (like those used in GPT-4) to 1.5bit. Naively quantizing all layers breaks the model entirely, causing endless loops & gibberish outputs. Our dynamic quants solves this - see our dynamic quants' impact and benchmarks on models like Phi-4 and Llama Vision here.
>We provide 4 dynamic quantized versions. The first 3 uses an importance matrix to calibrate the quantization process (imatrix via llama.cpp) to allow lower bit representations. The last 212GB version is a general 2bit quant with no calibration done.
Kinda. Their process is much more complex than mine.
>>1285
I've also used them for "write me code" and "write a document about something which I am not allowed to share under NDA"
Replies: >>1334
erispolandball.mp4
(519.4KB, 608x640, 00:04)
New cool shit: https://github.com/lllyasviel/FramePack
Image to video with a user-friendly (idiot-proof) GUI. Uses Hunyuan and Flux for generation. You can generate videos with only 6GB (six gigabytes) of VRAM!
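(If it follows lllyasviel's usual repo layout, running it should be roughly pip install -r requirements.txt then python demo_gradio.py, but check the README yourself; I haven't verified.)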
>>1323
Well, what was the right model, back in April 2024?
>>1328
>and "write a document about something which I am not allowed to share under NDA"
Like an anonymization tool? That's a good practical use case.
https://meta.4chan.gay/tech/67288#p121591
Replies: >>1338
>>1332
I'm always amazed by how much lighter these video gens are getting, so quickly. I'll have to give them a spin once I'm home.
Replies: >>1337
>>1336
takes around 20 minutes for a 5 second gen on a 3060
Replies: >>1339
ClipboardImage.png
(65.6KB, 962x240)
>>1335
>random link with no description
pope pls.
>>1337
What kind of dimensions is that?
Replies: >>1340
>>1339
512x512
Replies: >>1341
soyblonde.jpg
(56.9KB, 475x485)
>>1340
or 465x475
this image's dimensions
Replies: >>1342
641284a27d6e06a9bd448c1f8a4f01262a6e4741b1a8468b6680427cd9fc98a0.mp4
(9.9MB, 720x480, 00:50)
>>1341
Thanks. As frustrating as the 20 minute gens are, I'm surprised you can even use a *60 for video.
It's absolutely valid to be annoyed at the current state, but I'm purely a hobbyist and just upgraded from an 8 year old budget card a couple of months ago, so I'm still starry-eyed about what's possible.
Replies: >>1343 >>1344
>>1342 *
Budget as in, I was often doing CPU gens to avoid OOM
35f1412052af5544b91339af78f6b8f0503101cf.mp4
(985.6KB, 608x640, 00:04)
>>1342
im not annoyed, its a good enough speed, the model is also very good for movements. even better than wan 2.1 (its also around 20 minutes for a 5 second video with all optimizations)
oh also im on linux and i use sageattention2 (for better speed)
maybe a lower speed is possible but i havent really fucked around much, did one gen and thats it
=NEVERMIND ITS 640P OR SOMETHING=
Replies: >>1345 >>1346 >>1377
>>1344
only thing im annoyed about is the fact that i bought a 3060 for 600$ but as time passes i get less annoyed because 4060 had less vram and less cuda cores kek
5060 has barely more cuda cores, still less vram
>>1344
>the model is also very good for movements
Yeah I noticed an improvement there.
A passing visitor came on here a few months ago and converted a few random images to video clips, but they were just a slight (broken) camera pan or an attempt to make hair move; now seeing actual bigger transformations like that one and the motorbike girl on 4gay's thread rlly maek me think. Using Hunyuan and Flux makes me think it's going to pack a decent amount of power beyond super-common basics like walking, throwing, and basic face expressions.
70e8a063013332a2f943b62c4fd77409551497d90da98c762237bd7b975c0407.mp4
(2.5MB, 608x640, 00:09)
>>1332
Not too good, but we are getting there. Annoying that it is quite bad at adhering to the starting frame and to anatomy. I think having an ending-frame option would improve it significantly, since then multiple videos could be chained together.
ClipboardImage.png
(134.1KB, 604x644)
>>1347
garbage in garbage out
try putting something that isnt complicated like stripper shit and give a better source image...
Replies: >>1349
>>1348
I just wanted to push it to the limit to see how well it handles it.
Replies: >>1352
>>1347
>I think having an ending frame option would improve it significantly since then multiple videos could be chained together.
That and infinite loops. Chaining videos together would be beautiful in a montage; I can't remember an example of what I'm picturing, but you could hard-cut between two completely different chars and backgrounds in the exact same pose.
Replies: >>1351
>>1350
you can already do endless loops with wan if you put the frames to a weird number or something
>>1349
Much rather see that than cry over how perfect her wink animation is. This is Erischan, Goddess of Strife, Discord, Chaos and All That Rot, give us as many fingers and broken legs as you can find!
Replies: >>1353 >>1358
e7d1fac3cf6bac2a96365b10aa8c0ebabcb9e90c0a877a5cf084c126a131b9c6.mp4
(2MB, 420x360, 00:17)
>>1352
>der 'cord
eristrannies not like this...
Replies: >>1354 >>1355 >>1357
b3986f9a023b446753e608e5a1c701e5fc49ce6931bda08a5e33f6926688732d.webp
(18.7KB, 1267x752)
>>1353
a jihad on 🗿Discord®™ Inc.🗿!
>>1353
Hey greyface, you've got the wrong site! Gaychan is 2 tabs down. Go there to satisfy your obsession with men in dresses.
discord is bad
Replies: >>1359
41862bd18bde8b567a6450448003b0ed42575888bbfd45cfc6c4aa332ca6fc83.jpg
(387.6KB, 1200x756)
>>1353
we don't FUCKING talk about the gamer company
>>1352
>This is Discord
Replies: >>1360
>>1356
>he says on a website dedicated to discordianism
Shut up, ordercuck! Discord forever!
>>1358
This is secretly an Electron web app in disguise. Your RAM is being devoured as we speak. It's too late for your computer now...
Replies: >>1361
ClipboardImage.png
(11.9KB, 808x223)
>>1360
fuck..
Replies: >>1362
>>1361
GOOD LUCK RUNNING YOUR FANCY MOVIE MACHINE NOW

Your fortune: Stay hydrated!
Replies: >>1363
>>1362
oh shit i need to drink more water
happy Easter anons!
Replies: >>1366 >>1367 >>1371
>>1365
You too.
Frank Reynolds.webp
(10.1KB, 352x417)
>>1365
Replies: >>1368
chaos computing club.jpg
(51.1KB, 480x320)
>>1367
A pope has given their Easter blessing!!
Spoiler File
(266.6KB, 1080x1920)
Funny how coomers are doing more to advance the state of AI than billion dollar tech giants.

>b-but deepseek!
They were doing it as a "side project" -- meaning some engineer gave his coworkers access to his cream machine and eventually a corpo noticed that his underlings were all jerking off at work and decided to investigate.

Anyway, I eagerly await the day when I am considered a fuddy-duddy for thinking children shouldn't have slop beamed directly into their eyeballs for 12 hours a day. Assuming kids are able to read my posts, that is; I usually write with above a third-grade vocabulary.
Replies: >>1374 >>1375
Apologies for the interruption if you caught Erischan offline or in a different state of being for a limited time. We have changed DNS hosts for unrelated reasons.
We offer of course a warm welcome to all the refugees.
Replies: >>1373 >>1378
>>1365
He is risen!
>>1370
Smooth as butter here, Pope.
>>1369
It's really just an extension of the bullshit-but-i-believe-it Freudian theory that all great things are really just motivated by trying to get laid - some of us just swapped out people for pixels and phones.
370fb15089f8dd1f83c1d427669d73c459ab61ea5c2273bc461f1d60125ca473.webp
(19.4KB, 600x600)
>>1369
>I usually write with above a third grade level vocabulary.
hey ive got an ai for that if u want
moving from cp site to the site that links to cp sites.
bravo, lmg!~
eoF7lEG-240510607.jpg
(651KB, 1668x955)
DeiyJrDUwAAJQJ8-3876097710.jpeg
(428KB, 1920x1080)
tumblr_p67qukw6Zu1wp4jg6o1_1280-1887959236.png
(384.5KB, 1280x660)
>>1344
>oh also im on linux and i use sageattention2 (for better speed)
I've been using Forge (A1111 with some more experimental improvements) which is easy babby mode, but I've started learning Comfy. Giving me flashbacks to UE4 Blueprints spaghetti.
>>1370
Thanks admindude.
I just go directly to overboard instead of viewing the homepage, so I didn't see the warnings in the news. I reckon the global announcement message is good for that stuff, or maybe we should put the latest news there automatically.
>>1332
Is it just not good at genning porn or is it a skill issue on my side?
Replies: >>1381 >>1382
Spoiler File
(4MB, 360x640, 01:15)
>>1379
Not that pope, but I kinda like a little touch of uncanny body weirdness like >>1347, so it's a /taste/ issue. Try having shit taste.
Replies: >>1382
20e5cfc2811f8df6544fd63f3244a01c64dd4f0c72f2089d90fb3f20cce899fc.webp
(23.1KB, 600x315)
>>1379 (Me)
It was a prompt issue (don't say anything sex-related, just describe the movement), an unlucky seed, and a TeaCache issue (turn that shit off). It works much better now, but it's still quite bad.

>>1381
That must be the solution; it was your-video-tier uncanny. I should stop being a porn snob and be a porn bob.
>>1303
>>1312
From what I saw, it's vague. They announced that Fictional.ai (an unreleased mobile app specializing in character gen, I think) has a new collaboration with Chroma, and said "If you've ever wondered what Pony would have been if we decided to go with FLUX, this is the answer!"
Llama 4 was 2 weeks ago. Qwen 3 was merged into transformers 3 weeks ago. When is Qwen 3 dropping? What was that panic all about at Meta? Did someone feed them the wrong info? Did they get... trolled? Or is Qwen retraining out of fear of flopping even harder? Oh, llamacon is in 9 days; will they drop the models after Meta embarrasses itself again with their 2T retard? Would be real funny if Drummer's Behemoth outperforms it.
Replies: >>1386
>>1385
I thought the panic at Meta was a knee-jerk reaction to DeepSeek and the potential of models coming from China, not Qwen specifically.
Replies: >>1387
>>1386
But R2 was planned for May, after llamacon?
>>1301
How is hidream? Better than flux/illustrious? Any good loras/tunes?
current meta for lmg refugees is dis:
https://8chan.moe/ais/res/6258.html
Replies: >>1390 >>1392
>>1389
There are Syrians; here are Ukrainians. I'll stay here, thank you.
Replies: >>1391
>>1390
This place has had the most quality posts out of all the lmg threads.
Replies: >>1395
4c249609a76d131ce9cc1ce5a0b07891864d6d43d04b8d3eb35bafc7854c1eac.jpg
(85.2KB, 640x1006)
>>1389
>>1391
If you say so. I don't see a single quality post itt.
Replies: >>1396
shittylaff.webm
(125KB, 640x608, 00:01)
250421_075515_245_1641_19.mp4
(292.2KB, 864x448, 00:02)
>>1332
I'm pretty new to using models good enough to actually identify objects rather than the whole image. I'm really impressed that the foreground in the Blade Runner parody is effectively untouched (as intended) and not just slowly crumpling up.

>>1395
ok
Spoiler File
(538KB, 608x640, 00:04)
>>1332
Got it working a bit, yay!
Replies: >>1398 >>1400 >>1422
>>1397
Holy shit.
How long did it take to gen that and on what hardware?
Replies: >>1399
>>1398
>Image
5 minutes rolling with noob

>Finding the right combination of prompt and seed
4 hours

>Actually generating the video
30 minutes

>Reversing video
10 seconds

>hardware
VRAM cucked 3080
Replies: >>1401 >>1438
32fa594f5c8738b688c58a7629142fa1bcaa5888cec01a7671621664eb364d30.mp4
(81.2KB, 242x240, 00:09)
>>1397
Great progress, pope!

Great to see it doesn't look jittery or tweened (artificially smooth like an old flash animation)
>>1399
Hey, that's not unusable.
Might spin up a Kaggle instance to see how fast it gens with 30GB of VRAM.
You can split that shit across two GPUs, right?
Replies: >>1402
>>1401
Dunno. Can cuda autosplit? I don't see anything in code.
ClipboardImage.png
(152.5KB, 475x475)
>training a LoRA
>notice all the images are starting to gen a bunch of small plants and leaves around the bottom corners
>look back at some training images I gen'd from earlier version
>each new set of training images has a few more plants than the last one
>they've created a feedback loop
>mfw I'm now battling a virtual weed infestation
Replies: >>1406 >>1410
>>1405
https://bernardmarr.com/generative-ai-and-the-risk-of-inbreeding/
dont tell me you're training on generated images.
Replies: >>1407
>>1406
I'm training for a niche interest with few usable original images, so there aren't many other options this time.
Will give that blog a read, but I'm aware that this way dragons lie. I'm trying to make the images diverse, but there's clearly some inweeding going on.
Replies: >>1408
>>1407
the blog is not worth reading, but the issue is worth reading about (find some better resource)
Replies: >>1411
>>1405
kek
>>1408
I think the risk is pretty overblown. If your training loop is literally AI generate -> AI tag -> AI input with no human involvement then you will obviously get problems, but if it's AI generate -> human tag/filter -> AI input then it should be much better. Imagine a trolling bot that incorporates its own output into the training set if it gets more than a certain number of replies.
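To make the contrast concrete, here's a toy sketch of the two loop shapes (every name in it is hypothetical):
```
# Toy sketch: why the human in the loop matters.
def risky_loop(model, ai_tagger, n):
    data = []
    for _ in range(n):
        img = model.generate()
        data.append((img, ai_tagger(img)))  # AI tags AI output: errors compound
    return data

def safer_loop(model, human_review, n):
    data = []
    for _ in range(n):
        img = model.generate()
        tags = human_review(img)  # human filters and tags: drift gets damped
        if tags is not None:      # None = the human rejected the image
            data.append((img, tags))
    return data
```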
Replies: >>1412
not actual data.webp
(122.1KB, 512x512)
layout.webp
(48.7KB, 512x512)
layout.webp
(48.7KB, 512x512)
>>1411
Looking at it again now, I'm starting to think one of the checkpoints I used tends to generate the subject in the middle of a vignette of objects, and so my LoRA has been taught to do the same without me noticing. I don't know the right words to explain it, but it's as if the random init noise first moves towards this shape, regardless of seed. The subject gens in the centre, and those dark spots in the corners get filled with shrubs, books, shadows, or some other environment object. If indoors, even the room layout tends towards a diamond to match (pic 2)

That's a deeper pattern I didn't spot, and hopefully I don't have to throw the images away, because I was using them to try and teach a concept - e.g. having a person holding flowers in one image, then changing only the flower color in the second image, to teach it what "[x] flowers" means. The method has worked, but the risk of overtraining is obvious.
Replies: >>1414
medians.webp
(12.1KB, 82x41)
>>1412 *
I'm using alpha masks as one of my main techniques to try to avoid overtraining, but I might need to set the alpha level lower on related images. I used imagemagick to compute the median image (convert them all to TIFF, then convert *.tif -evaluate-sequence median result.tif), and the version with the slop has one of the training images imprinted so clearly you could almost extract it. When I omit the slop, there's still a vague blob shape in the middle, but the difference is night and day.
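Same median check in Python for anyone without imagemagick (a sketch: the folder name is hypothetical, and np.stack assumes every image has the same size):
```
# Compute the median image of a training set, like the one-liner above.
import glob
import numpy as np
from PIL import Image

paths = sorted(glob.glob("training/*.png"))
stack = np.stack([np.asarray(Image.open(p).convert("RGB")) for p in paths])
median = np.median(stack, axis=0).astype(np.uint8)
Image.fromarray(median).save("median.png")
```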
GIGO
Replies: >>1425
b1b28b2694d5c833d210fc6f82eb83f7345cfad4fd5d2ce0dcb09311d276ad6b.jpg
(94.8KB, 982x487)
Sorry for the offtopic, but looks like 4cuck is kill.
Replies: >>1416 >>1419 >>1420
>>1415
Old news.
Replies: >>1417
>>1416
Was new news for me. Nobody informed me :(
Replies: >>1418
>>1417
Leave us your address and phone number and we'll make sure you're notified of any further gossip.
a37d8e59173b68715d5f091e599550aa89b142c37bd97ae7d984ef5cb7ebe9de.webm
(881.9KB, 408x330, 00:07)
caramellmansen.mp4
(2.5MB, 640x480, 00:23)
550aa7a8ee4ce213c4a2d89714f4a06bbbce86a11e5512a2c084b7593911446b.webm
(3.7MB, 1280x720, 00:30)
c82a5a4515f0c221cde40ea8e3b0dba8de4016917d7afb4800059faee57e100e.mp4
(840.8KB, 852x480, 00:08)
>>1415
left that cesspit in 2017, apart from a couple of slow niche generals that hadn't found homes yet. it was a rotten corpse that finally decomposed
29cd24b2493bb6ecc80dc8499f050e7fef9242222c5c8f39a106c25c7728ea3b.jpg
(88.5KB, 467x599)
>>1415
>were working on a legal response
>we are coming for you and we will find you.
GOOD LUCK IM BEHIND SEVEN PROXIES
5a7af0df55a002ed2f0dda0f5f92b27b2436021573985dcbcfee7a09e50ab8bb.mp4
(4.8MB, 1456x1080, 00:03)
I AM BEHIND MASSIVE SCHLONG HORSEY FAT MILKY DICK
Replies: >>1429
Spoiler File
(305.3KB, 608x640, 00:02)
Spoiler File
(587.3KB, 608x640, 00:03)
>>1332
>>1347
>>1397
Chat, we are cooking! I repeat, we are cooking!
https://github.com/TTPlanetPig/FramePack_SE
This adds start frame and end frame support!
If you use it, keep in mind that it resizes images and weird effects may happen if they are not equally sized.
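A minimal guard against that, assuming PIL and hypothetical filenames: resize the end frame to match the start frame yourself before handing both over, so the tool's internal resize can't skew one of them:
```
# Make the end frame match the start frame's size before feeding both in.
from PIL import Image

start = Image.open("start.png")
end = Image.open("end.png")
if end.size != start.size:
    end = end.resize(start.size, Image.LANCZOS)
    end.save("end_resized.png")
```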
Replies: >>1423 >>1428
>>1422
Cool stuff, can you get a loop going? 
Someone tell the kid to add a heading to their README.
Replies: >>1428
* as in a cycle, not a 'boomerang' that only works for reversible motions
>>1414
>convert *.tif -evaluate-sequence median result.tif
Oops, should have used mean instead of median.
result-mean.webp
(7.8KB, 171x171)
*
Just in case anyone finds it useful: alpha was still acting unexpectedly in tests, so I converted alpha to magenta (convert $name -background Magenta -alpha remove $name.tif) before running convert *.tif -evaluate-sequence mean result.tif.

This TIFF stuff reminds me of databending. You can get some fun results bass-boosting an image.
i am a mathematical genius. the calculator of a generation.webp
(34.2KB, 600x600)
Oh neat an /lmg/

Noob here. The two big enemies of training seem to be overtraining and undertraining. Assuming I have infinite time, what are the risks and benefits of pursuing this green path to slower but balanced learning? Apart from the time needed, is there a point where this becomes harmful? Is it just diminishing returns?
Spoiler File
(956.7KB, 608x640, 00:04)
>>1422
Can't do undressing, but can do magic tricks. A bit sad.

>>1423
Yes, with 2 videos: start→midpoint and midpoint→start. Doesn't work in one.
Replies: >>1432
>>1421
post body+face+timestamp
we know the face isnt you faggot and you grom niggers on the cord
Replies: >>1433 >>1436
Windows 3.1 - Tada.mp3
(50.3KB, 00:02)
>>1428
>>1429
Wow, it's like 4chan never left
You gotta laugh when they talk down to other sites.
>>1429
indian obsessed with my BWC
>DeepSeek
Honey badger
>Mistral
Cat
>Google
Cow
>Claude
Dolphin
>GPT
Dog
>>1399
Those are good numbers