I saw a Copilot prompt in MS PowerPoint today - top left corner of EVERY SINGLE SLIDE - and I had a quiet fit in my cubicle. Welcome to hell.
Isn’t it magnesium chloride? More ions = better melting.
Please do not put ice salt on your food in a pinch
The article features a picture of bikers but doesn't mention transit options. I'd be so much happier if I didn't have a 45-minute commute. It turns my 40-45 hr/wk job into a 46-55 hr/wk job, depending on traffic. Also, my blood pressure would be lower…
Personally, I don't think the evidence has established that an AI tutor is better than no tutor. Maybe for English; certainly not for Math, Science, History, Art, or any other subject.
LLMs can be wrong - so can teachers, but I’d bet dollars to donuts the LLM is wrong more often.
Unfortunately, I don’t think we’re close to getting an AI music tutor. From a practical standpoint, a large language model can’t interpret music.
From an ethos standpoint, allowing AI to train a musician breaks my heart.
Is it a physical copy? If so, it’s likely related to flame retardant used in its manufacturing.
I'll need the full peer-reviewed paper, which, based on this article, is still pending. Until then, based on this blog post, here are my thoughts as someone who's education-adjacent but scientifically trained.
Most critically, they do not state what the control arm was. Was it a passive control? If so, of course you’ll see benefits in a group engaged in AI tutoring compared to doing nothing at all. How does AI tutoring compare to human tutoring?
Also important, they were assessing three areas - English learning, Digital Skills, and AI Knowledge. Intuitively, I can see how a language model can help with English. I'm very, very skeptical of how they define those last two domains of knowledge. I can picture what a standardized English test looks like, but I don't know what they were assessing for the latter two domains. Are digital skills just computer or typing proficiency? If so, obviously you'll see a benefit after kids spend 6 weeks typing. And "AI Knowledge"??
ETA: They also show a figure of test scores across both groups. It appears to be a composite score across the three domains. How do we know this “AI Knowledge” or Digital Skills domain was not driving the effect?
ETA2: I need to see more evidence before I can support their headline claim of huge growth equal to two years of learning. Quote from article: "When we compared these results to a database of education interventions studied through randomized controlled trials in the developing world, our program outperformed 80% of them, including some of the most cost-effective strategies like structured pedagogy and teaching at the right level." Did they compare the same domains of growth, i.e., their English instruction to an English assessment? Or are they comparing "AI Knowledge" to some other domain of knowledge?
But, they show figures demonstrating improvement in scores testing these three areas, a correlation between tutoring sessions and test performance, and claim to see benefits in everyday academic performance. That’s encouraging. I want to see overall retention down the line - do students in the treatment arm show improved performance a year from now? At graduation? But I know this data will take time to collect.
Growing up, sure, I enjoyed Hunger Games and a handful of others. But my least favorite trope of all time is the Myers-Briggs Proxy that Sorts You Into a Clique Forever. Houses, Guilds, Colors, whatever it is, I hate it. But that's just my personal taste.
There’s so much great fiction out there. I’m a sucker for a sci fi or fantasy epic series.
I’m willing to try any book - if I can find it at my local library. But, if I’m not captured within the first couple of chapters, I’ll stop reading.
Last book I dropped was Red Rising. It was recommended to me by a friend I'd convinced to start reading Sanderson, so I figured it was worth a try. Turns out I'm not a fan of edgy YA fiction - the "Hunger Games but Horny and Violent" genre.
Grapefruit inhibits a specific metabolic pathway in the liver (the CYP3A4 enzymes), and most medications are broken down by that same pathway. That's just how the body works, unfortunately.
I've seen publishers advertise their other titles within the box, which, honestly, isn't an issue for me. These, however, are crossing a line.
Shells or coral could serve as early tools, but (just my opinion) I feel it’s a little human-centric to assume fire and metallurgy are required to progress. Just because we did it that way, doesn’t mean another species would have to.
If anyone's interested in this sort of speculative sci fi, check out The Mountain in the Sea by Ray Nayler. 10/10 world building, 9/10 science backing, 6/10 writing.
See Alk's comment above; I touched on medical applications there.
As for commercial uses, I see very few. These devices are so invasive, I doubt they could be approved for commercial use.
I think the future of brain-computer interfacing lies in functional near-infrared spectroscopy (fNIRS). Basically, it uses the same infrared technology as a pulse oximeter to measure changes in blood flow in your brain. Since it uses light (instead of electricity or magnetism) to measure the brain, it's resistant to basically all the noise endemic to EEG and MRI. It's also 100% portable. But the spatial resolution is pretty low.
HOWEVER, the signals have surprisingly high temporal resolution. With a strong enough machine learning algorithm, I wonder if someone could interpret the signal well enough for commercial applications. I saw this first-hand in my PhD - one of our lab techs wrote an algorithm that could read as little as 500ms of data and reasonably predict whether the participant was reading a grammatically simple or complex sentence.
It didn’t get published, sadly, due to lab politics. And, honestly, I don’t have 100% faith in the code he used. But I can’t help but wonder.
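For anyone curious what that kind of decoding pipeline looks like, here's a minimal sketch (my own toy example with made-up data and hypothetical dimensions, not his actual code): chop the multi-channel signal into ~500ms windows and fit a simple linear classifier on them.

```python
# Toy sketch of 500ms-window fNIRS decoding (simple vs. complex sentence).
# All data here is randomly generated placeholder; swap in real preprocessed signals.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical dimensions: 400 trials, 16 optode channels,
# 5 samples per window (~500 ms at a 10 Hz sampling rate).
n_trials, n_channels, n_samples = 400, 16, 5
X = rng.normal(size=(n_trials, n_channels, n_samples))  # fake signal windows
y = rng.integers(0, 2, size=n_trials)                   # 0 = simple, 1 = complex sentence

# Flatten each window into one feature vector and fit a linear classifier.
X_train, X_test, y_train, y_test = train_test_split(
    X.reshape(n_trials, -1), y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With random data this sits at chance (~50%); the interesting question is whether real hemodynamic windows beat that.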
A traditional electrode array needs to be as close to the neurons as possible to collect data. So, straight through the dura and pia mater, into the parenchyma where the cell axons and bodies are hanging out. Usually, they collect local data without getting any long-distance information, which is a limiting factor for this technology.
The brain needs widespread areas working in tandem to get most complicated tasks done. An electrode is great for measuring motor activity because motor regions are pretty localized. But something like memory or language? Not really possible.
There are electrocorticographic (ECoG) devices that place electrodes over a wide area and can rest on the pia mater, on the surface of the brain. Less invasive, but you still need a craniotomy to place the device. They also have lower resolution.
The most practical medical purpose I’ve seen is as a prosthetic implant for people with brain/spinal cord damage. Battelle in Ohio developed a very successful implant and has since received DARPA funding: https://www.battelle.org/insights/newsroom/press-release-details/battelle-led-team-wins-darpa-award-to-develop-injectable-bi-directional-brain-computer-interface. I think that article over-sells the product a little bit.
The biggest obstacle to invasive brain-computer implants like this one is their longevity. Inevitably, any metal electrode implanted in the brain gets rejected by the brain's immune system. It's a well-studied process where a glial scar forms, neurons move away from the implant, and the overall signal of the device decreases. We need advances in biocompatibility before this really becomes revolutionary.
ETA: This device avoids putting metal in the brain; instead, it sends axons into the brain. Certainly a novel approach, which runs into different issues: the new neurons need to be accepted by the brain, and they need to be kept alive by the device.
If they moved the cell bodies into the brain and had the device house the axons and dendrites (a neuron's output and input), they could maybe let the brain keep the device alive. But that would be a much more difficult installation procedure.
Fantastic question. Like Will_a said, I've never seen a device designed for input to the brain like this.
In this particular example, if someone were to compromise the device, even though it's not able to "fry" their brain with direct electricity, they could overload the input neurons with a ton of stimulus. This would likely break the device because the input neurons would die, and it could possibly cause the user to have a seizure, depending on how connected the input was to the user's brain.
That does bring to mind devices like the one developed by Battelle, where the device reads brain activity and then outputs to a sleeve or cuff designed to stimulate muscles. The goal of the device is to act as a prosthesis for people with spinal cord injuries. I imagine that device was not connected to the internet in any way, but in a worst-case scenario where a hacker compromises the device, they could cause someone's muscles to seize up.
Agree, fascinating question. To be precise, they used genetically modified neurons (aka optogenetics) to test if the device can deliver a signal into the brain. Optogenetics incorporates neurons modified with light-sensitive channel proteins, so the neuron activates when a precise wavelength of light is “seen” by the special protein. One of the coolest methods in neuroscience, in my opinion.
“To see if the idea works in practice they installed the device in mice, using neurons genetically modified to react to light. Three weeks after implantation, they carried out a series of experiments where they trained the mice to respond whenever a light was shone on the device. The mice were able to detect when this happened, suggesting the light-sensitive neurons had merged with their native brain cells.”
Oh neat, another brain implant startup. I published in this field. If anyone has questions, I’m happy to answer.
Oh, you think that’s bad? Just wait until you meet the sensory homunculus.
https://cdn.britannica.com/27/239527-050-1BE6DCAF/Sensory-cortical-homunculus-model.jpg?w=400&h=225&c=crop