A man of leisure living in the present, waiting for the future.
Freeing humans from toil is a good idea, just as the industrial revolution was. We just need our systems to adapt and change with this new reality; AGI plus universal basic income could mean we live in something like the society in Star Trek.
The one that passed: Lexus Teammate with Advanced Drive (2022-24 Lexus LS)
Saved you a click.
He blew up a Starship test because he overrode his engineers, thought he knew better than them, and didn’t install a blast deflector under the rocket. The launch blew a giant hole in the pad, and the concrete debris damaged the rocket.
This is further proof of that.
The article says SSD manufacturers currently sell at a loss and intend to raise prices because they want to be profitable: a 40% increase is break-even, 50% is profitable.
Step one: invent replicators
I really hate this usage of the term “Boomer.” Words mean things!
It depends on what kind of character I want to RP; sometimes that’s a dude, sometimes not.
Several stops have already been created for CA high-speed rail in the valley.
I think the whole Twitter debacle has disproven that he’s a good businessman. Rather, it highlights how far one can get with inherited wealth and being in the right place at the right time.
It’s a very expensive walled garden
They also use seed values (like the current time and the MAC address of the PC’s network interface) to generate numbers that only seem random.
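To make that concrete, here’s a minimal sketch of a pseudorandom generator (a textbook linear congruential generator, not what any particular OS actually ships): the seed fully determines the whole “random” sequence, which is why seeding with the clock or a MAC address only makes the start point unpredictable, not the process itself.

```python
# Minimal linear congruential generator (LCG) sketch: fully deterministic,
# but the output *looks* random. The same seed always yields the same sequence.
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    values = []
    state = seed
    for _ in range(n):
        state = (a * state + c) % m
        values.append(state / m)  # scale to [0, 1)
    return values

# Seeding (e.g. with the current time) just picks the starting state.
seq1 = lcg(seed=42, n=3)
seq2 = lcg(seed=42, n=3)
assert seq1 == seq2  # identical: "random" only in appearance
```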
For the purposes of this discussion, pseudorandom with weights is probabilistic, or so close to it that the distinction is irrelevant.
I could be wrong; I’ll keep reading. Thanks for the feedback and the citations.
Thanks for citing specifics, but I’m still not seeing what you’re claiming there. This paper seems to be about the limits of accurately classifying true and false statements in LLMs, and it shows that there is a linear pattern in the underlying classification via multidimensional analysis. That seems unsurprising, since the way LLMs work is essentially taking a probabilistic walk through an array of every possible next word or token, based on multidimensional analysis of the patterns of each.
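That “probabilistic walk” can be sketched in a few lines; this uses a made-up five-word vocabulary and made-up scores (real models rank tens of thousands of tokens at every step):

```python
import math
import random

# Hypothetical tiny vocabulary and next-token scores (logits); both made up.
vocab = ["Beijing", "is", "in", "China", "France"]
logits = [0.1, 0.2, 0.3, 2.5, 0.4]

# Softmax turns the scores into a probability distribution over the vocabulary.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# The "probabilistic walk": sample the next token from that distribution.
random.seed(0)
next_token = random.choices(vocab, weights=probs)[0]
```

“China” gets the highest probability here, but sampling means a lower-probability token can still be emitted, which is one way plausible-sounding wrong answers come out.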
Their conclusions, from the paper (by the way, arXiv is not peer-reviewed):
In this work we conduct a detailed investigation of the structure of LLM representations of truth.
Drawing on simple visualizations, correlational evidence, and causal evidence, we find strong reason to believe that there is a “truth direction” in LLM representations. We also introduce mass-mean probing, a simple alternative to other linear probing techniques which better identifies truth directions from true/false datasets.
Nothing about symbolic understanding; it just shows that there is a linear pattern to statements defined as true vs. false when graphed a specific way.
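For what it’s worth, the “mass-mean probing” mentioned in that conclusion reduces to a very simple linear rule; here’s a toy sketch with made-up 2-D “activations” standing in for the real 5120-dimensional LLM hidden states:

```python
# Mass-mean probing sketch: the probe direction is simply
# (mean of "true" representations) - (mean of "false" representations).
def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

true_reps = [[2.0, 1.0], [2.5, 1.2], [1.8, 0.9]]      # toy "true" activations
false_reps = [[-2.1, -0.8], [-1.9, -1.1], [-2.4, -1.0]]  # toy "false" ones

mu_t, mu_f = mean(true_reps), mean(false_reps)
direction = [t - f for t, f in zip(mu_t, mu_f)]        # the "truth direction"
midpoint = [(t + f) / 2 for t, f in zip(mu_t, mu_f)]

def classify(x):
    # Project x (relative to the midpoint) onto the direction; sign decides.
    score = sum((xi - mi) * di for xi, mi, di in zip(x, midpoint, direction))
    return "true" if score > 0 else "false"

classify([2.2, 1.0])  # lands on the "true" side of the separating plane
```

Note that nothing in this rule involves meaning: it’s geometry over activation vectors, which is exactly the point I’m making about pattern matching.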
From the associated data explorer:
These representations live in a 5120-dimensional space, far too high-dimensional for us to picture, so we use PCA to select the two directions of greatest variation for the data. This allows us to produce 2-dimensional pictures of 5120-dimensional data.
So they take the two dimensions that differ the most and chart those on X/Y, showing there are linear patterns in the differences between statements classified as “true” and “false.” Because this is multidimensional and it’s AI finding patterns, there are patterns being matched beyond the simplistic examples I’ve been offering as analogues: patterns humans cannot see, patterns that extend beyond the simple, obvious correlations we might spot in training data. It doesn’t literally need to be trained on statements like “Beijing is in China,” and even if it is, it’s not guaranteed to match that as a true statement. It might find patterns in unrelated words around these, or might associate these words or parts of these words with each other for other reasons.
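The PCA step they describe is a standard dimensionality reduction; a minimal sketch, using toy 50-dimensional data in place of their 5120-dimensional representations:

```python
import numpy as np

# Toy stand-in for the paper's setup: 100 "statements", each represented
# by a 50-dimensional activation vector (theirs are 5120-dimensional).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))

# PCA: center the data, then take the top-2 right singular vectors,
# i.e. the two directions of greatest variation.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:2]

# Project onto those two directions: a 2-D picture of 50-D data.
X2 = Xc @ components.T
print(X2.shape)  # (100, 2)
```

Everything outside those two directions is thrown away, which is worth keeping in mind when interpreting the 2-D pictures.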
I’m rather simplifying how LLMs work for the purposes of this discussion, but the point stands: pattern matching on words still seems to account for all of this. LLMs, which are probabilistic in nature, often get things wrong; LLaMA-13B is the best performer here and it still gets things wrong a significant amount of the time.
Those patterns of words can correspond to dimensions of “true” or “false” (the words/tokens, not the concepts), more or less, though, right? I’m still not seeing why this would be indicative of symbolic understanding rather than sophisticated probabilistic language prediction and correlation.
I have been reading it, but I have yet to see anything indicating the LLM has a concept of truth, as opposed to being good at linguistic pattern matching that returns language accurately classifying true and false statements; i.e., actual understanding of concepts vs. being a surprisingly capable stochastic parrot via multidimensional analysis.
I like the language you used in your explanation. It’s hard to find good analogues to explain why these aren’t intelligent, and it seems most people don’t understand how they work.
I just want to say I appreciate your informed opinions in contrast to the doom and gloomerism combined with class warfare that is so pervasive here.