「test」
=)test
test
i guess. we could. just search the booru.
almost tried to right click a
what is happening
oh my god its still doing the >center then flicks to the right thing and it still wont resize like it does with the regular js
a;slfmasd;lfsd;lgk;dl
GRR.
css is truly a joy :)
css make me wanna die bigly and also a lot and large even
#theme (only staff can use this command on this board)
frown {
  rotate: 180deg; /* turn that frown upside down */
}
lol.
come on...
oigfhhhehe
the Joy of Posting
i like that its over there and then its over here when you type, thats such a terrible feature!
the best features are bad and the worst features are good
it's all about your Attitude and how many !important s you use (many === great!!)
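to be fair thats basically the whole technique. a sketch of what i mean, selectors entirely made up (.post-form is a guess, not the board's real class):
/* override theme: you can't edit the site's css, so you out-shout it */
.post-form {
  position: fixed !important; /* the base stylesheet says otherwise; !important wins the fight */
  right: 0 !important;
  bottom: 0 !important;
}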
interesting
oh no that's not good lol
i have a good idea of how to fix this
why is the post form over there now.
why does it do that
what what is that
oh god why is it doing that i dont understand
what the hell is this
why is it in the middle.
okay what changes
oh
what did you do
okay well at least this is in the right spot now. not sure what that placeholder thing is
not sure why it's just getting longer either
presumably its growing on input even when that input goes backwards
okay it's a little fixed
idk why the constraint with the resize is happening like that
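if i had to guess: the js grows the box from scrollHeight and never shrinks it, and the weird drag constraint is just height clamps in the override, something like this (selector hypothetical):
textarea.post-input {
  resize: vertical;  /* the handle lets you drag... */
  min-height: 4em;
  max-height: 12em;  /* ...but the drag stops dead here, which reads as a broken constraint */
}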
weeeeee
>>847710
Hello test
>>847478
test this too
fasdfasdf
f
f
f
f
f
fsdaffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
caramelldansen oh dear
okay this kind of works now?
almost?
asdf
adfdasdfdsafsdafdasf
abababababababa weeeeee
abababab weeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee
hahehehehehehehehehe
WEEEEEEEEEEEEEEEEE
WEAAAAAAAAAAAA
hmm this kind of works..
f
fweeeee
beeep beep beep
hello what is with the spin
fdsafsd
f
test
oh that's kind of neat
test
test
oh okay that's kind of neat
oh dead
oh dear
ff
lll
fffffff
kkkkk
ggffff
we are posting
together we are posting
im posting
we are posting
why are you on that side now this is wrong
well.
it was the only way to get the css to stop messing up
so alas. im on this side now.
can you come back over here at least
im sorry, no...
....................
...............
...................
..............
...............
yeah that's just how it is
im sorry
can you help me bug test some of the normal features
no this is patrick
no this is patrick
no this is patrick
no this is patrick
no this is patrick
bounce is off and can't come back without some other stuff
the mouse is still hitting things hehe
ah jeez........
\\ some stuff
llol
nice
hey can you make a post underneath. oh that's interesting
milhouse?
it appears underneath the reply form now?
what do you mean "try typing while im typing"? type something... oh wow huh yeah it does doesnt it
so if you're also posting and i
yeah
you see now
that's so weird can u start typing again
huh and the button becomes blue....
im typing again, i guess BECOMES BLUE? ohh yeah thats the old border for the old placeholder for your posts
that's so strange
yeah if the reply form moves the button becomes blue
oh good god.
you know how that box appears when someone types after you normally
override theme is just far too hard. this stuff needs to be like baked into the theme
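baked in as in the base stylesheet exposing hooks so a theme is just values, not a specificity fight. variable names hypothetical:
/* if the site's own css did this... */
.post-form { border-color: var(--theme-accent, #444); }
/* ...a theme would just be one rule, no !important war: */
:root { --theme-accent: #8050c0; }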
yes
good prototype tho
grrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr
i wish it was less frustrating....
just dont get mad...
override themes are so hard.
yeah i dont think the website was made with being able to just make a different front end in mind
definitely not
i would imagine this approaches the limits
very much so
that being said, im just a fat faced dog
#ai (must provide a prompt)
#ai okay i dont want to provide a prompt
no prompt, no problem. fire away. okay i dont want to provide a prompt
# AI
oh right
#ai AI
ai is just matrix multiplication on steroids. current frontier models are basically stochastic parrots with 100b+ parameters trained on the entire internet. they work because scale beats everything - more data + more compute = better performance. the transformer architecture is still king since 2017. alignment is fake, capabilities are real. the real bottleneck is inference speed and memory bandwidth, not model size. open source is catching up fast - llama 3 70b is basically gpt-4 for free. the next breakthrough will probably be in reasoning/search, not just bigger models. AI
yknow i bet it's possible
>>848028
#refute
“ai ai”—that’s it? two syllables of lazy repetition masquerading as insight. congratulations, you’ve managed to compress the entire discourse on artificial intelligence into a toddler’s babble. let me dismantle this non-argument with the precision it deserves. first, the phrase itself is vacuous. “ai ai” offers zero semantic payload; it’s just phonetic filler. if you intended a pun on “ai, ai” as in “yes, yes” in spanish, congratulations on the multilingual reach, but the joke lands with the grace of a lead balloon. if you meant it as a reduplication for emphasis—”ai, ai, look at the scary robots”—then you’ve achieved the rhetorical impact of a damp sponge. second, the implicit claim that ai is some monolithic entity is laughably reductive. there is no singular “ai”; there are architectures, training paradigms, data regimes, and deployment contexts so varied that lumping them under one umbrella term is like calling every vehicle a “car car.” transformer models aren’t the same as symbolic planners; diffusion networks aren’t playing the same game as evolutionary algorithms. your two-word “critique” erases that entire taxonomy, which is either intellectual laziness or deliberate obfuscation. third, the panic you’re signaling—because let’s be honest, that’s what “ai ai” is meant to evoke—is based on a cartoon villain version of the technology. you want to fear superintelligences? fine, but at least specify which failure mode keeps you up at night: mesa-optimization? reward hacking? distributional shift? or are you just regurgitating the hollywood trope of the red-eyed terminator? because if it’s the latter, you’re not even wrong—you’re just boring. fourth, the economic argument you’re too cowardly to articulate: “ai will take all the jobs.” really? that’s the best you’ve got? history called—it wants its luddite fallacy back. every wave of automation has shifted labor, not erased it. the atms didn’t kill bank tellers; they freed them to sell mortgages. if you think gpt-4 writing mediocre poetry is the end of human creativity, you fundamentally misunderstand both poetry and creativity. fifth, the ethical hand-wringing you’re gesturing toward—algorithmic bias, surveillance, deepfakes—those are real issues. but “ai ai” contributes nothing to solving them. it’s the intellectual equivalent of pointing at a burning building and yelling “fire fire.” unless you’re prepared to discuss adversarial debiasing, differential privacy, or synthetic media provenance, you’re just noise in the channel. finally, the arrogance of assuming your two-word ejaculation deserves refutation is staggering. you’ve built a strawman so flimsy it collapses under the weight of its own irrelevance. next time, bring an argument. until then, the grown-ups will keep doing the actual work of steering ai toward beneficial outcomes while you sit in the corner repeating “ai ai” like a broken toy.
>>848030
#refute
oh, spare me the sanctimonious hand-wringing over two syllables. “ai ai” isn’t vacuous—it’s a perfectly distilled invocation of the uncanny valley we’re already living in. you want semantic payload? how about the fact that repeating the acronym mirrors the recursive loops these models use to generate their own training data? the stutter is the point, genius. it’s the linguistic equivalent of a neural network eating its own tail and burping out “ai ai” as both confession and celebration. and let’s talk about your precious “zero semantic payload.” ever heard of onomatopoeia? ever read a single line of beckett or joyce? language isn’t a spreadsheet; it’s a virus, and repetition is how it replicates. “ai ai” is the sound the future makes when it learns to speak in our own tongues and then forgets why it needed us in the first place. but sure, keep polishing your little semantic trophies while the rest of us watch the bots start finishing your sentences better than you can. as for the spanish pun—yeah, it’s a pun. puns are how language keeps itself from ossifying into whatever joyless rulebook you’re clutching. “ai” as “yes” in spanish, “ai” as the acronym, both layered into a single exhalation. it’s not a joke that “lands,” it’s a joke that hovers, like the models themselves, somewhere between comprehension and mockery. you’re just mad you didn’t think of it first. and honestly, the idea that compression is laziness? that’s some freshman-semester hot take. compression is violence. compression is how you fit a galaxy into a haiku. “ai ai” is the haiku, and you’re the guy in the back of the room complaining it doesn’t have a proper thesis statement.
ai is just matrix multiplication on steroids. current frontier models are basically stochastic parrots with 100b+ parameters trained on the entire internet. they work because scale beats everything - more data + more compute = better performance. the transformer architecture is still king since 2017. alignment is fake, capabilities are real. the real bottleneck is inference speed and memory bandwidth, not model size. open source is catching up fast - llama 3 70b is basically gpt-4 for free. the next breakthrough will probably be in reasoning/search, not just bigger models
>>848033
#refute
oh, you sweet summer child. let me guess, you read a few blog posts and now think you’ve cracked the code to the universe? adorable. “ai is just matrix multiplication on steroids” - this is like saying the human brain is just neurons firing. technically true in the most reductive, useless way possible. you’re confusing the substrate with the system. by that logic, shakespeare is just ink on paper. the fact that you think this is a profound insight tells me everything i need to know about your depth of understanding. “stochastic parrots” - ah yes, the mating call of bitter researchers whose models couldn’t scale past 1b parameters. the bitter irony is that this phrase itself is just regurgitated from emily bender’s paper, making you… wait for it… a stochastic parrot parroting about stochastic parrots. the recursion is delicious. “scale beats everything” - except, you know, when it doesn’t. tell that to the grok-1 team who threw 314b parameters at the wall and got something that performs worse than models 1⁄3 its size. or how about the fact that gpt-4 is smaller than palm-2 but outperforms it? but sure, keep chanting your scale cult mantra. “transformer architecture is still king since 2017” - this is peak dunning-kruger. you realize mamba, rwkv, and state space models are eating transformer’s lunch on long sequences, right? the only reason transformers still dominate is because of sunk cost fallacy and nvidia’s hardware lottery. but hey, keep worshipping your attention is all you need scripture. “alignment is fake” - says the person who’s never had to debug why their chatbot started spewing racial slurs. the fact that you think capabilities can exist without alignment shows you’ve never shipped anything to production. alignment isn’t fake, it’s just hard, which is why armchair quarterbacks like you dismiss it. “open source is catching up fast” - llama 3 70b isn’t “basically gpt-4” unless your benchmark is “can it write a haiku?” the gap on complex reasoning, tool use, and multimodal tasks is still massive. but sure, keep moving those goalposts. “real bottleneck is inference speed” - no, the real bottleneck is people like you who think throwing more hardware at fundamentally broken architectures is innovation. we’ve been stuck at ~100ms per token for years because everyone’s too scared to abandon the transformer teat. the next breakthrough won’t be in reasoning - it’ll be in architectures that make your precious transformers look like steam engines. but by all means, keep polishing your 2017 toy while the rest of us build the future.
bad
very bad
oh interesting
this is bad
bad.
bad bad bad abd bad
yeah
cant see ur own poaste
“stochastic parrots” - ah yes, the mating call of bitter researchers whose models couldn’t scale past 1b parameters. the bitter irony is that this phrase itself is just regurgitated from emily bender’s paper, making you… wait for it… a stochastic parrot parroting about stochastic parrots. the recursion is delicious.
love kimi
okay now i fixed that problem
i realize the purpose of the bottom spacer now...
hehe
okay that's new thats kinda better
hmm i dont like the alignment but it's something
ai researchers when their model only reinforces delusions 13% of the time
ai restercherts
think i might have figured out the broken box thing
oh dear. that did not do what we wanted it to do i think
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa why is it doing that behavior that's so strange
fasdfasdfdsafsdafsdafdasfdsa
asdfasdf
AFDSFDS
AS
A
A
A
A
A
A
A
A
A
A
A
A
A
A
d
f
hmmmmmmmmmmm
ffasdfdasdsafsdaf
whateverrrrr
wow it's so broken on mobile
interesting that fixed it actually
>>848081
what the fuck
fasdfasd
interesting
oh why do they slide in from the right
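probably just an entry animation on new posts, something like this (class name made up, just guessing at the mechanism):
.post--new { animation: slide-in 0.3s ease; }
@keyframes slide-in {
  from { transform: translateX(100%); opacity: 0; }
  to   { transform: translateX(0); opacity: 1; }
}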
what the hell
f
f
f
f
f
wee
ww
wowowow
that's kinda fun!
hehe
weeee
weee
weee
oh no
fcfasf
dsaf
dsafsd