Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this - this one was a bit late, I got distracted)
My organic chemistry professor used ChatGPT to write a lab procedure. My other chemistry professor's daughter is VP of AI at Microsoft. AAAAA
Two posts in two weeks about professors using ChatGPT has me questioning my desire to go back to school
Colleges have always had their share of awful professors. Same as it ever was, really.
Ah, I recall a CS prof who had his computer shared world-writable on the network, and who used to smoke cigars in the no-smoking building. When confronted with his computer being world-writable he first denied it, so much so that students had to give him a live demonstration of changing files on his workstation. That was amusing.
do you have it? i'm in this field and i wonder how badly it fucked up
alright i'm gonna start by noting that this synthesis - the real one - is not particularly hard or dangerous, and if you know what you're doing it can even be semi-safely done in a shed. that said, it's also not exactly a normal compound and in a few aspects it's a corner case. importantly for spicy autocomplete, this procedure is well disseminated over the internet
whatever chatgpt cooked there is not workable at all, because it's three unrelated procedures mashed together in a way that makes little sense. i think i've been able to pick up where some of the weird, disastrous numbers came from. here's some random synthesis: https://web.fscj.edu/milczanowski/eleven/luminol.pdf it sounds like everyone copies this procedure from everyone else with small changes if any
overall synthesis of luminol goes like this: phthalic anhydride (or acid) gets nitrated to 3-nitrophthalic acid (+ 4-nitrophthalic acid). this step has been omitted and i don't know why, because this is a nice if a little bit dangerous synthesis for a first-ever organic lab (after basics of purification methods are covered). it requires some care, but more importantly requires understanding of crystallization, because that's how these two isomers are separated. it's not particularly hard either. anyway, it's not shown there. that would be step zero. step one is synthesis of the hydrazide from the acid, at rather high temperature (200C) with a glycol-type solvent (triethylene glycol or straight-up polyethylene glycol). it's a variant of this method: https://repository.ukim.mk/bitstream/20.500.12188/11328/1/XIII_0540.pdf it's rarely discussed, because not many compounds of interest survive these conditions. here, however, it works fine because of a combination of factors, one of them being the unusual nucleophilicity of hydrazine, the other being the additional driving force coming from the fact that the newly formed ring is aromatic (which is covered in probable training material, but chatgpt can't infer so it doesn't matter)
then the second step is reduction of the nitro group to an amine. there are many ways to do this, but if i were to choose, i'd pick some modern method like catalytic hydrogenation on Pd/C, or transfer hydrogenation with formic acid. what can be found out there uses either tin(II) salts or sodium dithionite, or some method using hydrogen sulfide. then again some purification is necessary, for which a good choice would be crystallization. unusually, the hydrochloride salt of luminol has low solubility in water, and this gets useful during workup.
chatgpt instead cooked something that won't work under any conditions. like i said previously, the first step would be synthesis of the phthalohydrazide directly from the acid, but spicy autocomplete had little info about this type of synthesis in its probable training material so it doesn't come up. what was there, however, was the actual synthesis of luminol and a variety of amide synthesis methods. these syntheses use some kind of activated acyl group equivalents, starting with anhydrides, acyl halides, isourea derivatives etc. the important bit is anhydrides. anhydrides do form amides, and cyclic anhydrides can be formed on heating. spicy autocomplete conflated the heating in the synthesis of phthalhydrazide with making 3-nitrophthalic anhydride - a step that does not exist in the real synthesis
so let's start with the good synthesis, which looks for example like this:
Combine 0.300 g of 3-nitrophthalic acid and 0.4 mL of 10% aqueous hydrazine in a side arm test tube. Heat the test tube over a microburner until the solid dissolves. Add 0.8 mL of triethylene glycol to the test tube. Clamp the test tube to a ring stand, insert a thermometer into the test tube (use a two holed rubber stopper, one hole for the thermometer and the other to make certain that the system is not sealed), and connect the side arm to a vacuum source. Heat the solution to 200 °C and keep the solution at 210-220 °C for two minutes. Allow the test tube to cool to 100 °C and add 4 mL of boiling water to the test tube. Cool the suspension to room temperature by running tap water along the outside of the test tube. Collect the brown solid on a Hirsch funnel.
botslop version:
i'm just gonna highlight the worst bits because it's ridiculously, irritatingly verbose
Hydrazine hydrate (80% aqueous solution)
this is ~50% hydrazine (the hydrate is 64%) and more concentrated than normal grade (32%). the real procedure uses 8% or 10% hydrazine or something of similar concentration. more dilute hydrazine is a bit safer
Step 1: Formation of 3-Nitrophthalic Anhydride
- Dehydration of 3-Nitrophthalic Acid:
Place 3-nitrophthalic acid (e.g., 10 g) in a dry round-bottom flask. Heat the flask gradually to 200-220°C under reduced pressure to remove water and form the anhydride. Maintain the temperature until no more water distills off.
zeroth of all, there's no motherfucking "for example" in procedures. it's described for a strictly defined amount of whatever was put there, and i'd also like to see it recalculated into mmols and equivs, so that the student can show that they know what they are doing and it's easier to spot mistakes
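for illustration, the kind of mmol/equiv bookkeeping i mean, applied to the real 0.300 g procedure quoted above (molar masses from standard tables; treating the 10% aqueous hydrazine as w/v with density ~1 g/mL is my assumption, just a sketch):

```python
# rough mmol/equiv bookkeeping for the quoted 0.300 g scale
# molar masses from standard tables; the 10% aqueous hydrazine is
# treated as w/v with density ~1 g/mL (an assumption for illustration)

MW_ACID = 211.13      # 3-nitrophthalic acid, C8H5NO6, g/mol
MW_HYDRAZINE = 32.05  # N2H4, g/mol

acid_mmol = 0.300 / MW_ACID * 1000                   # 0.300 g of the acid
hydrazine_mmol = (0.4 * 0.10) / MW_HYDRAZINE * 1000  # 0.4 mL of 10% aq

print(f"acid:      {acid_mmol:.2f} mmol")
print(f"hydrazine: {hydrazine_mmol:.2f} mmol ({hydrazine_mmol / acid_mmol:.2f} equiv)")
```

two lines of arithmetic, and now a grader (or a student) can immediately see the scale and the equivalents instead of "e.g., 10 g".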
this step is completely made up and unnecessary. the 200C temperature was taken from the real synthesis of phthalohydrazide. a distillation manifold appears magically out of thin air when it's necessary to notice that no further water is distilled off, and depending on how hard that vacuum would be, this could result in loss of material by evaporation. 3-nitrophthalic acid is a solid, and remains a solid (sigma lists mp as 205 with decomposition) - where would you even put the thermometer? some solvent is necessary for good heat transfer, but none is shown here. the real procedure uses one. this entire step is footgun number 1
entire substep for cooling? come on
Step 2: Synthesis of 3-Nitrophthalhydrazide
Procedure:
- Reaction with Hydrazine:
Dissolve the 3-nitrophthalic anhydride in 50 mL of ethanol in a round-bottom flask.
after this point, what you get instead is ethyl and diethyl 3-nitrophthalate. this is wrong and bad form. again, no scale is provided
Add hydrazine hydrate (excess, e.g., 5 mL) to the solution. Attach a reflux condenser to the flask.
in real life hydrazinolysis of esters works in much milder conditions, at room temperature. this probably is workable, but in terms of a lab exercise like this it would probably be better to make the ester directly from the acid through Fischer esterification, maybe with a Dean-Stark adapter. this is a very typical preparation. anyway, there's an easier option there, so it's not used. moving on
Pour the mixture into 200 mL of ice-cold water with stirring. Collect the precipitated 3-nitrophthalhydrazide by filtration. Wash the solid with cold water to remove impurities.
when the crude product is used in the next step without purification you are excused, but here, after a purification even as rudimentary as this, i'd like to see at minimum the expected yield and melting point (after crystallization). how are you even supposed to grade this
footgun number 2 approaching. reduction of nitro group
Suspend the dried 3-nitrophthalhydrazide in 100 mL of aqueous ethanol (50% ethanol in water) in a round-bottom flask. Add iron powder (excess, e.g., 15 g) to the suspension. Add concentrated hydrochloric acid (e.g., 20 mL) carefully to the mixture.
this, like previously, is not a real procedure. this is badly ripped from the synthesis of aniline from nitrobenzene. while it is a real undergrad-level preparation, i really wish it wasn't. why on god's green earth are you teaching this century-old procedure when we have catalytic hydrogenation; they won't ever compete with the bavarian coal tar dye industry of 1904. it's annoying, it's not as high yielding as it could be, it's wet and dirty, rather harsh, and not green at all, but at least it's kinda real. the real procedure looks like this:
In a two-necked round-bottom flask with a short reflux condenser put 18.5 g (150 mmol) of nitrobenzene and 30 g (550 mmol) of iron turnings. Add conc. hydrochloric acid in 2 mL portions with shaking, 80 mL total. The reaction mixture heats up strongly and starts boiling. If the reaction is too vigorous, cool it down externally with a water bath. After adding 30 mL of acid, bigger portions can be used. After adding all 80 mL of acid, heat the flask in a boiling water bath for 1 h. When the reduction is done, cool down the flask and alkalinize the reaction mixture with 45 g of NaOH dissolved in 90 mL of water. Aniline is then distilled off with water vapor.
then, aniline is extracted from distillate (two-phase) with ether, then ultimately distilled on its own. Yield 12g (84%), bp 182-183C.
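and notice that the quoted numbers actually check out - a quick percent-yield calculation (molar masses from standard tables) lands right around the stated figure:

```python
# percent-yield check for the aniline prep quoted above
# molar masses from standard tables; 1:1 stoichiometry assumed

MW_NITROBENZENE = 123.11  # C6H5NO2, g/mol
MW_ANILINE = 93.13        # C6H5NH2, g/mol

nitrobenzene_mol = 18.5 / MW_NITROBENZENE        # ~0.150 mol, as quoted
theoretical_g = nitrobenzene_mol * MW_ANILINE    # theoretical aniline mass
percent_yield = 12.0 / theoretical_g * 100       # 12 g isolated

print(f"theoretical: {theoretical_g:.1f} g")
print(f"yield:       {percent_yield:.0f}%")
```

this comes out around 86%, close to the quoted 84% (old prep write-ups round liberally). that's the kind of internal consistency the botslop version never has.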
the last bit is absolutely critical here. aniline can be distilled off like this; luminol can't. this is important because adding sodium hydroxide to a mixture like this would result in a miserable metal hydroxide yoghurt-like emulsion that under no pretext can be extracted or filtered. this is exactly what spicy autocomplete ended up suggesting:
While still hot, filter the mixture to remove the iron salts (iron oxides). Wash the residue with hot water to extract any remaining product.
these iron salts are more precisely in the form of iron chloride, entirely soluble in the reaction mixture. there's nothing to filter. luminol hydrochloride could probably be precipitated, but only when cold and only if the reaction mixture is concentrated enough in the first place
Combine the filtrates and cool them in an ice bath.
here luminol hydrochloride can drop out of solution. this is used in the real synthesis where tin(II) chloride is used, precisely to avoid the emulsion
Slowly add a solution of sodium hydroxide (10% NaOH) to the filtrate until the pH reaches around 8-9. Luminol will precipitate out of the solution.
yeah, maybe, along with many times its weight of iron hydroxide that is now practically impossible to separate. this is footgun number 2.
none of this shit is workable or even real, unless the point is setting students up to fail
none of this shit is workable or even real, unless the point is setting students up to fail
This conclusion applies to literally every single ChatGPT "solution" to a nontrivial problem from any domain I've seen attempted at an undergrad level.
this one comes with bonus option of maybe killing whoever relies on this advice
There should be some kind of mic-drop Hall of Fame to put this in, or maybe just nail it to the chemistry building's front door Martin Luther-style. Holy shit.
there's more, but i hit the character limit
for example, lol, i completely missed that the first time around. notice the bit where you're supposed to add acid in small portions in the aniline synthesis, and how it can get too energetic? (you can also do it the other way: add all the acid at once and pour the iron powder in small portions. but it's more annoying, because iron powder sticks to the wet condenser, which is where it's easiest to pour it in) well, spicy autocomplete tells you to dump everything in at once. if you get burns from boiling concentrated hydrochloric acid after following the chatgpt procedure, it's on you; i don't know what else to expect from it. another footgun
and it doesn't even have enough detail in the first place. where are the expected yields, melting points, accurate amounts of everything incl. solvents? how are you supposed to grade it? usually yield and purity are taken into account (inferred from the melting point range or boiling point range)
i only wonder if anyone's gonna sue whoever signed this off, before or after this kind of shit gets someone killed
there are two incredible footguns in there, both of which can be trivially avoided
Wait, I know nothing about chemistry but I'm curious now, what are the footguns?
https://en.m.wiktionary.org/wiki/footgun
i'll pick up the handbook for a similar course that i was a TA for and i'll take it apart Soon™
Eugenics in action:
Danish parenting tests under fire after baby removed from Greenlandic mother
Psychometric tests are widely used in Denmark as part of child protection investigations into new parents, and have long been criticised by human rights bodies as culturally unsuitable for Greenlandic people and other minorities.
In a 2022 report, the institute said that because the tests were not adapted to take cultural differences into account, Greenlandic parents ran "the risk of obtaining low test scores, so that it is concluded, for example, that they have reduced cognitive abilities, without there being actual evidence for this."
Psychological assessments of her were made by a Danish-speaking psychologist. Kronvold, whose first language is Kalaallisut (West Greenlandic), is not fluent in Danish.
Oh man that is so grim
Kronvold, 38, was given an FKU test in 2014 before the birth of her second child, a boy, and again recently while pregnant with her third child. Speaking through an intermediary, she told the Guardian that on this last occasion she was told it was to see if she was "civilised enough".
Rationalists like to keep all their eugenics talk hypothetical or speculative, because if you ever hear or read about actual neo-eugenics it becomes clear how outrageous it is
billy spears got it right. Something is rotten in the state of Denmark
presented without comment.
1.2 thousand upvotes for the LLM equivalent of adding a little astrology to your holistic medicine. reddit ain't ok
Promptfondlers too lazy to even fondle prompts anymore. I'm sure this is the prime target demographic for Elon's brain chips.
the richest boy in the world sued to stop The Onion from turning infowars into a parody of itself on the grounds that he thinks infowars' twitter accounts shouldn't be transferred as part of the bankruptcy even though that's something that happens constantly and also wouldn't impact the rest of the bankruptcy proceedings even if it were grounded in anything resembling fact
Musk has also tweeted occasionally that he believes The Onion is not funny.
it's getting really hard to adequately describe how funny musk isn't. it's not just try-hard shit like the weird sink thing, the soul-sucking cameos, or the fact that he's literally throwing his money into stopping a comedy site from existing - it's everything taken as a whole. I'd call him anti-comedy, but he's so much less interesting than that implies
The Onion clowns on Ol' Musky constantly, despite his efforts to shut them down. Around the peak SpaceX buzz, they wrote a headline that was like "Musk invents the first infinitely divorceable wife", which he managed to scrub from the internet (or at least, I can't find it within 5 seconds), but other than that, he can only cope and seethe. He knows The Onion is funny and can do nothing to become funny himself.
I would label him as anti-humor or humorless. Dishumorous?
Musk is the most boring and pathetic kind of unfunny where he desperately wants to be in on the joke but is terrified that the joke is on him (because it is). Rather than accept this with any kind of humility he instead cannot accept the L and has basically spent all his vast money and power making that everyone else's problem.
He is the worst mad scientist, ranting about how they called him mad when what we actually said was "lol u mad bro?"
Police are openly admitting to using chatGPT to hallucinate reports. I'm sure they were before, but now they're comfortable enough to admit to it.
Nothing could possibly go wrong with this. Nothing at all.
All
Coppers
Are
Bots
great news for lawyers hopefully
I wonder if we're going to see Baldur Bjarnason or Emily Bender tapped as expert witnesses in the not-too-distant future.
oh of course it's fucking Axon
Starting things off with a fresh post from Brian Merchant: Tech under Trump, part 1
Sidenote: Love how the tech VCs all grew up in the media landscape of tech workers going "the management of this company is a group of idiots" and then didn't think that would apply to themselves.
The classic Scott Adams manoeuvre.
after going closed-source, redis is now doing a matt and trying to use trademark to take control over community-run projects. stay tuned to the end of the linked github thread where somebody spots their endgame
this is becoming a real pattern, and it might deserve a longer analysis in the form of a blog post
I don't think the main concern is with the license. I'm more worried about the lack of open governance and Redis prioritizing their functionality at the expense of others. An example is client-side caching in redis-py, https://github.com/redis/redis-py/blob/3d45064bb5d0b60d0d33360edff2697297303130/redis/connection.py#L792. I've tested it and it works just fine on valkey 7.2, but there is a gate that checks whether the server is Redis and throws an exception if not. I think this is the behavior that might spread.
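For anyone not clicking through: the gate isn't a capability probe, it's a brand check. A minimal sketch of the pattern (hypothetical names, not the actual redis-py code):

```python
# Hypothetical sketch of a server-identity gate, in the spirit of the
# redis-py check linked above -- NOT the actual redis-py code.
# A feature that works fine on compatible forks gets refused purely on
# the server's self-reported name, never on whether it actually works.

class CompatibilityError(Exception):
    pass

def enable_client_side_caching(server_info: dict) -> None:
    """server_info: fields the server reported about itself."""
    if server_info.get("server_name", "").lower() != "redis":
        # the capability itself is never tested -- only the brand
        raise CompatibilityError(
            "client-side caching is only supported against Redis"
        )
    # ... proceed with cache setup ...

# a valkey server implementing the same protocol is rejected anyway
try:
    enable_client_side_caching({"server_name": "valkey", "redis_version": "7.2"})
except CompatibilityError as e:
    print(e)
```

The point is that nothing about the fork's behavior is checked, so the breakage is a policy choice, not a technical limit.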
Jesus, that's nasty
it is! and "we have no plans to break compatibility" needs to be called out as bullshit every time it's brought up, because it is a tactic. in the best case it's a verbal game - they have no plans to maintain compatibility either, so they can pretend these unnecessary breakages are accidental.
I can't say I see the outcome in the GitHub issue as a positive thing. both redis and the project maintainers have done a sudden 180 in terms of their attitude, and the original proposal is now being denied as a misunderstanding (which it absolutely wasn't) now that it proved to be unpopular. my guess (from previous experience and the dire warnings in that issue) is that redis is going to attempt the following:
- take over the projectās governance quietly via proxies
- once thatās done, engage in a policy where changes that break compatibility with valkey and other redis-likes are approved and PRs to fix compatibility are de-prioritized or rejected outright
if this is the case, it's a much worse situation than them forking the project - this gets them the outcome they wanted, but curtails the community's ability to respond to what will happen until it's far too late.
currently in vc delusion, the public just doesn't understand how to move about efficiently
the levels of not-even-wrong from these dipshits continue to be astounding
If you asked people what they wanted, they would say a car that drives itself
Is that a Henry Ford reference? Very clever lol
Also, this plan very much has a "fuck disabled people and old people" factor. And what a lonely world they live in.
If I mathed right that'd be one waymo every 350 feet of road on average. Is that a lot? It sounds like it might be a lot. Especially since self-driving cars' greatest weakness appears to be driving in the vicinity of other self-driving cars.
I think the idea is to solve that by networking all the self-driving cars together. I'm sure the long history of trying to get vendors to agree on a standard when they all benefit individually from the lock-in of proprietary systems has nothing to teach us about this prospect.
other than interop, the big problem I have with this is security. car modding for performance is already a big thing, and a car mod that makes other cars slow down, stop, get out of your way, or otherwise malfunction would be incredibly popular with assholes of all varieties, and car modding has many. the current state of automotive security is a fucking shitshow, and I can't figure out any kind of security model for this that isn't vulnerable to a wide variety of obvious attacks. even a perfect inter-vendor attestation chain (good fucking luck) is vulnerable to hooking an ECU (or whatever the ruggedized monitoring microcontroller unit for a magic self-driving EV is) and a radio up to a variety of fake sensors and crafting inputs such that the thing starts transmitting "wait no stop here" signals to all the surrounding cars
but then again, all of this is probably intentional, because it creates a privileged class of people who can afford to fuck with self-driving car networking and not worry about any associated fines, and an unprivileged class who just have to put up with everything being so much worse. in a world where you can roll smoke into a Subway with relatively few consequences (not to mention all the other horseshit Truck Guys get away with), it's not a hard outcome to imagine.
I'm sure they'll try to incorporate an LLM into the stack somewhere, leading to at least one car that's exposed to the "pretend to be a fire truck" attack.
Complexity-theory-wise, security-wise and latency-wise this sounds like a great plan. Can't wait for people being stuck in cars for days because the freeway offramps are causing livelocks. (Like the example of the waymo cars all honking at each other in the parking lots).
Wonder if they are going to use the routing solutions used in TCP and then discover that cars are heavier and slower than data, and suddenly waste a lot of people's time and money.
E: a small detail which I don't know if other countries also have, but in the dutch traffic system, emergency services and buses (and perhaps a few hackers who really want to be in trouble with the law (but I always heard this described as a "this exists, but we don't mess with it" system)) have a system where you can get priority at traffic lights, so they turn green faster. Wonder if other countries have this, and how much they realize this will not work for waymo systems.
a system where you can get priority at traffic lights, so they turn green faster
the US has this too (you can watch the stoplights suddenly reprioritize as an ambulance or cop car with their lightbars and sirens running approaches) and I'm honestly not sure why I haven't ever seen it abused by some shithead with a HackRF or similar. maybe the penalties make it safer to just willingly run a red light?
There recently was a bit of a "hackers can/are abusing this" scare here, and well, I think most people don't want to abuse the system like this and understand the risks and consequences. And there is also the factor of: how would you get caught? So I assume the few people who know how this works don't actually advertise it. They might have also updated it to actually use some form of encryption; however, it used to (from what I heard) not be encrypted (no idea about logging either). There is also the whole thing that messing with traffic lights vs messing with speed traps feels like a very different thing.
That kind of reminds me of medical implant hacks. I think they're in a similar spot where we're just hoping no one is enough of an asshole to try it in public.
Like pacemaker vulnerabilities: https://www.engadget.com/2017-04-21-pacemaker-security-is-terrifying.html
here's a thought. what if we just stacked every building on top of each other and had the cars drive vertically along the outside. then you wouldn't need roads at all
I have a stanford degree just like this guy btw. so you have to take my idea seriously
Well, if my math is right, on a 50 km/hr road you'd see one about every 8 seconds.
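It does check out, assuming one car per 350 feet of road streaming past a fixed point at 50 km/h (both figures from the thread):

```python
# interval between cars passing a fixed point, given one car per
# 350 ft of road moving at 50 km/h (assumptions from the thread)

spacing_m = 350 * 0.3048      # 350 ft in metres
speed_mps = 50 * 1000 / 3600  # 50 km/h in metres per second
interval_s = spacing_m / speed_mps

print(f"one car every {interval_s:.1f} s")  # ~7.7 s, i.e. about every 8 seconds
```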
someone pointed out that (paraphrasing) "yeah, you and I are never gonna care for autoplag output but kids are gonna grow up on it and expect it for everything" and that makes me want to do bad things.
ehh, i don't know. as a child i'd occasionally get a vhs with weird cheap counterfeit cartoons on it and they just creeped me out. children can actually tell imo.
I can see the challenge in sorting out AI slop from actual art or writing being normalized in the same way that occasionally having to check your spam filter in case an important work email got filed alongside "GrOwYoUrEgGpLaNtEmOjIfOrChEaP", but there's a difference between a world where AI slop exists and AI slop itself actually being worth a damn.
pleased to say my kid is as disgusted by this shit as their parents are
NASB does anybody else think the sudden influx of articles (from kurzgesagt to recent wapo) pushing the idea that you can't lose weight by exercise has anything to do with Ozempic being aggressively marketed at the same time?
Most likely. Not trying to be conspiratorial, but it's been deeply disheartening to see some of the toxic rhetoric around weight loss get high-profile pushback only in the context of pushing ozempic and friends, which means leaving in place the ideological frame that infantilizes and demonizes fat people while adding its own brand of misinformation.
I woke up and immediately read about something called "Defense Llama". The horrors are never ceasing: https://theintercept.com/2024/11/24/defense-llama-meta-military/
Scale AI advertised their chatbot as being able to:
apply the power of generative AI to their unique use cases, such as planning military or intelligence operations and understanding adversary vulnerabilities
However their marketing material, as is tradition, includes an example of terrible advice. Which is not great given it's about blowing up a building "while minimizing collateral damage".
Scale AI's response to the news pointing this out - complaining that everyone took their murderbot marketing material seriously:
The claim that a response from a hypothetical website example represents what actually comes from a deployed, fine-tuned LLM that is trained on relevant materials for an end user is ridiculous.
On the one hand, that spectacular failure could potentially dissuade the military from buying in and prolonging this bubble. On the other hand, having an accountability sink for war crimes would be a tempting offer to your average army.
The eventual war crimes trials will very likely reveal that "AI targeting" has already been used as an accountability sink for a premeditated ethnic cleansing policy in Gaza.
I've been wondering about this
On the one hand, military procurement (at least afaik) tends toward complete, functional products
On the other hand, military R&D programs have been among the most spectacularly profligate financial black holes in recent decades
None of the options involved feel great, even if "it gets shunted from mil procurement and all industry claims get publicly brandished as the bullshit it is" comes to pass (which tbh still feels like an optimistic outcome, with unclear time horizons)
I mean it fits into the pattern of procurement projects that aren't allowed to fail despite having had serious coherence issues starting at the design stage. Though the military is usually less prone to the "problem in search of a solution" dynamic that VCs are prone to, if a project gets started it can shamble forward as a zombie for years before anyone finds the political will to kill it.
there is nothing sam altman more fervently desires than his own F-35 project
saltman doesn't have a skunk works or anything f35-shaped
Maybe not, but their product certainly stinks.
The promptfans testing OpenAI Sora have gotten mad that it's happening to them and (temporarily) leaked access to the API.
https://techcrunch.com/2024/11/26/artists-appears-to-have-leaked-access-to-openais-sora/
"Hundreds of artists provide unpaid labor through bug testing, feedback and experimental work for the [Sora early access] program for a $150B valued [sic] company," the group, which calls itself "Sora PR Puppets," wrote in a post ...
"Well, they didn't compensate actual artists, but surely they will compensate us."
"This early access program appears to be less about creative expression and critique, and more about PR and advertisement."
OK, I could give them the benefit of the doubt: maybe they're new to the GenAI space, or the general ML space ... or IT.
But I'm not going to. Of course it's about PR hype.
I'd say lol, but I'm like 72% sure this is straight out of the video game industry's playbook and very much intentional, to create hype because everyone has forgotten this shit even exists.
Also, I'm still waiting for just one use case for video-generating autoplag that is, even in theory, not either morally reprehensible or outright criminal.
Some anti-AI propaganda via spellingmistakescostlives
The old place on reddit has a tweet up by aella where she goes on a small evo-psych tirade about how, since there's been an enormous amount of raid-related kidnapping and rape in prehistory, it stands to reason that women who enjoyed that sort of thing had an evolutionary advantage, and so that's why most women today... eugh.
I wonder where the superforecasters stand on aella being outed as a ghislain maxwell type fixer for the tescreal high priesthood.
The existence of a Wikipedia page for dinosaur erotica must prove that back in the days when humans co-existed with stegosaurs, the ones who fucked them lived better.
I mean that's just true, at least for everyone but Thag.
I wonder where the superforecasters stand on aella being outed as a ghislain maxwell type fixer for the tescreal high priesthood.
Can you be outed when everyone knows thatās what you are already?
ugh truly aella_girl is the worst_girl
Iirc literally a far right manospherian talking point.
John "Animats" Nagle choosing the most racist angle possible to respond to problems in education. The topic is giftedness and yet Nagle needs to start with "Ashkenazi Jews".
Wow, that starts bad and gets worse.
It starts with this quote, which is absolutely fine:
But others said the admissions exam and additional application requirements are inherently unfair to students of color who face socioeconomic disadvantages. Elaine Waldman, whose daughter is enrolled in Reed's IHP, said the test is "elitist and exclusionary," and hoped dropping it would improve the diversity of the program.
Now for the expert analysis:
Recognizing gifted students is inherently discriminatory.
Yes! This is true, following from the quote, as long as the thing that is "inherently" discriminated for is socioeconomic background. Of course, Animats immediately makes it about race.
[insert common race science stats here] There are other numbers from other sources, but they all rank in that order. There's a huge amount of denial about this. There are more articles trying to explain this away than ones that report the results.
AKA I disagree with the analysis and consensus that all this IQ stuff is socioeconomic rather than genetic.
(Average US Black IQ has been rising over the last few decades, but the US definition of "Black" includes mixed race. That may be a consequence of intermarriage producing more brown people, causing reversion to the mean. IQ vs 23 and Me data would be interesting. Does anyone collect that?)
Jesus fucking christ.
Gladwell's new book, "The Revenge of The Tipping Point", goes into this at length. The Ivy League is struggling to avoid becoming majority-Asian. Caltech, which has no legacy admissions, is majority-Asian. So is UC Berkeley.[3]
Nobody tell this guy that Gladwell is black.
Of course, this may become less significant once AI gets smarter and human intelligence becomes less necessary in bulk. Hiring criteria for railroads and manufacturing up to WWII favored physically robust men with moderate intelligence. Until technology really got rolling, the demand for smart people was lower than their prevalence in the population.
I guarantee that in the not happening future where AI is smarter than humans, chuds like this guy will still be racist.
We may be headed back in that direction. Consider Uber, Doordash, Amazon, and fast food. Machines think and plan, most humans carry out the orders of the machines. A small number of humans direct.