Forwarded from [FIRST READ, THEN FORM AN OPINION]
hello bitches!!
i have returned.
hear all the talk about big corpo using ai for surveillance?
i mostly don't believe this. (you should though, if you want to. who am i to tell you what to do anyways? use your own thinking skills)
my main reason is, if they're making surveillance ai, why the fuck are they making llms? surely it's a very expensive and unnecessary front to be keeping for this whole operation. llms are notoriously bad at face detection and image processing, so we can't say they're using llms to spy on people. that one just ain't it.
and if llms are just a decoy and they're actually making spy ais or something, why did they choose llms as a front specifically? something that is known to be very hard and tedious to train, fine-tune and supervise, and then have its harmful behaviors trained out?!
if i was building a hut in secret, i wouldn't build a burj khalifa in front of it to hide it, i'd build a 3 story apartment.
besides, if surveillance ai development is so cheap and easy that it takes only a fraction of the resources of, say, openai, whilst the rest goes to llm making, why the fuck even make the llm in the first place? just discreetly pay altman to build you a "magic-surveillance-ai-laboratory" in the basement of the white house.
speaking of which, how much basement does the white house have? what's down there anyways? huh. the less you know.
