It still works the same.
That is true for most current “self-driving” systems, because they are all just glorified assist features. Tesla is massively misleading its customers with its advertising, but on paper it’s very clear that the car will only assist in safe conditions, that the driver needs to be able to react immediately at all times, and that the driver is therefore also liable.
However, Mercedes (I think it was them) has started to roll out a feature where they will actually take responsibility for any accidents that happen due to this system. For now it’s restricted to nice weather and a few select roads, but the progress is there!
Eh it’s not that great.
One million Blackwell GPUs would suck down an astonishing 1.875 gigawatts of power. For context, a typical nuclear power plant only produces 1 gigawatt of power.
Fossil fuel-burning plants, whether that’s natural gas, coal, or oil, produce even less. There’s no way to ramp up nuclear capacity in the time it will take to supply these millions of chips, so much, if not all, of that extra power demand is going to come from carbon-emitting sources.
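As a sanity check on those numbers, here’s the back-of-envelope arithmetic. The per-GPU wattage of ~1,875 W is an assumption, inferred from the quoted total rather than a published spec:

```python
# Back-of-envelope check of the power figure quoted above.
# Assumption: ~1,875 W draw per Blackwell GPU (the number implied by the quote).
watts_per_gpu = 1_875
gpu_count = 1_000_000

total_gw = watts_per_gpu * gpu_count / 1e9  # watts -> gigawatts
print(total_gw)  # 1.875 -- nearly two typical ~1 GW nuclear plants' worth
```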
If you ignore the two fastest growing methods of power generation, which coincidentally are also carbon free, cheap and scalable, the future does indeed look bleak. But solar and wind do exist…
The rest is purely a policy rant. Yes, if productivity increases we need some way of distributing the gains from said productivity increase fairly across the population. But jumping to the conclusion that, since this is a challenge to be solved, the increase in productivity is bad, is just stupid.
So it stops once someone doesn’t finish?
You can literally run large language models with a single exe download: https://github.com/Mozilla-Ocho/llamafile
It doesn’t get much simpler than that.
Addendum:
The docs say
For reproducible outputs, set temperature to 0 and seed to a number:
But what they should say is
For reproducible outputs, set temperature to 0 or seed to a number:
Easy mistake to make
I appreciate the constructive comment.
Unfortunately the API docs are incomplete (insert obi wan meme here). The seed value is both optional and irrelevant when setting the temperature to 0. I just tested it.
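To make concrete why the seed is irrelevant at temperature 0, here’s a toy decoder step. This is just a sketch of the general idea, not OpenAI’s actual implementation:

```python
import math
import random

def sample_token(logits, temperature, seed=None):
    """Toy decoder step: greedy at temperature 0, seeded sampling otherwise."""
    if temperature == 0:
        # Greedy decoding: always pick the highest-scoring token.
        # The RNG -- and therefore the seed -- is never consulted.
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = random.Random(seed)
    weights = [math.exp(l / temperature) for l in logits]
    return rng.choices(range(len(logits)), weights=weights)[0]

logits = [0.1, 2.5, 0.3]
print(sample_token(logits, 0, seed=1))   # 1 -- same token...
print(sample_token(logits, 0, seed=42))  # 1 -- ...regardless of seed
```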
Yeah no, that’s not how this works.
Where in the process does that seed play a role, and what do you even mean by “numerical noise”?
Edit: I feel like I should add that I am very interested in learning more. If you can provide me with any sources to show that GPTs are inherently random I am happy to eat my own hat.
Crypto is basically cash for online transactions. Pretty niche, but cool and definitely in demand for some situations.
Just like how, in the real world, you’re shit outta luck if you lose your wallet. Or if you give someone money and they laugh in your face, you can either cut your losses or try your luck in a fist fight. It’s the same with crypto.
With banks you have a separate authority that can handle all these cases, which is desirable in 99% of all transactions.
Unfortunately it’s volatile af, and the most popular cryptocurrency (Bitcoin) has untenable transaction costs and throughput limits (roughly 7 transactions per second, globally - what a stupid design decision)
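The throughput ceiling follows from the protocol parameters themselves. The figures below are common ballpark assumptions (1 MB blocks, one block every ~10 minutes, ~250 bytes per average transaction), not exact values:

```python
# Rough ceiling on Bitcoin throughput from the protocol parameters.
# Ballpark assumptions: 1 MB block size, one block per ~600 s,
# ~250 bytes per average transaction.
block_bytes = 1_000_000
block_interval_s = 600
avg_tx_bytes = 250

tps = block_bytes / avg_tx_bytes / block_interval_s
print(round(tps, 1))  # 6.7 -- transactions per second, globally
```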
I’ve used it to improve selected paragraphs of my writing, provide code snippets, and find an old comic based on a crude description from a friend.
I feel like these interactions were valuable to me and only one (code snippets) could have been easily replaced with existing tools.
Yes it is intentional.
Some interfaces even expose a way to set the “temperature” - higher values mean more randomized (feels creative) output, lower values mean less randomness. A temperature of 0 will make the model deterministic.
It does not perform very well when asked to answer a stack overflow question. However, people ask questions differently in chat than on stack overflow. Continuing the conversation yields much better results than zero shot.
Also I have found ChatGPT 4 to be much much better than ChatGPT 3.5. To the point that I basically never use 3.5 any more.
No, it’s simply contradicting the claim that it is possible.
We literally don’t know how to fix it. We can put on bandaids, like training on “better” data and fine-tune it to say “I don’t know” half the time. But the fundamental problem is simply not solved yet.
It provides the capability to authenticate yourself online, e.g. for banking services. It would also be able to prove to a website that you are over 18, without telling the website your birthday. I have yet to use it, but from a technical standpoint it’s pretty awesome.
Edit: to clear up some confusion that may exist: as far as I know the app provides the bridge between the chip in the ID card and the application that needs the authentication. No data needs to be stored in the app.
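To illustrate the “prove you’re over 18 without revealing your birthday” idea: the verifier only ever sees a derived claim signed by the issuer, never the underlying attribute. This is a heavily simplified toy, with HMAC standing in for the real signature scheme and a made-up key name; the actual eID protocol is far more involved:

```python
import hashlib
import hmac

# Toy illustration only -- the real eID protocol is far more involved.
# Idea: the verifier receives a signed claim ("over_18=true"),
# never the birthdate itself. HMAC stands in for the issuer's signature.
ISSUER_KEY = b"government-issuer-secret"  # hypothetical issuer key

def issue_claim(attribute: str, value: str) -> tuple:
    """The card's chip attests a derived claim, signed by the issuer."""
    claim = f"{attribute}={value}"
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim, tag

def verify_claim(claim: str, tag: str) -> bool:
    """The website checks the signature; it learns only the claim itself."""
    expected = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

claim, tag = issue_claim("over_18", "true")
print(verify_claim(claim, tag))            # True  -- verifier learns only "over_18=true"
print(verify_claim("over_18=false", tag))  # False -- a tampered claim is rejected
```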
I would love to see this data, can you link it? Either a paper by unaffiliated researchers or the raw data is fine.
I am aware their marketing pushes the “10x better” number. But I have yet to see the actual data to back this claim.
That is the new system. Tesla has no equivalent to it. Or to phrase it differently:
Drivers cannot activate Tesla’s equivalent technology, no matter what conditions are met: not in heavy traffic jams, not during the daytime, not on specific California and Nevada freeways, and not when the car is traveling less than 40 mph. Drivers can never focus on other activities. The technology does not exist in Tesla vehicles.
If you are talking about automatic lane change, auto park, etc (what tesla calls autopilot or full self driving) these are all features you can find in most if not all high end cars nowadays.
The new system gets press coverage because, as I understand it, if there is an accident while the system is engaged, Mercedes will assume financial and legal responsibility and e.g. cover all expenses that result from said accident. Tesla doesn’t do that.
Also, it’s hard to argue “full self driving” means anything but the car is able to drive fully autonomously. If they were to market it as “advanced driver assist” I’d have no issue with it.
Definitely won’t get an argument from me there. FSD certainly isn’t in a state to really be called that yet. Although, to be fair, when signing up for it, and when activating it there are a lot of notices that it is in testing and will not operate as expected.
At what point do we start actually expecting and enforcing that people be responsible with potentially dangerous things in daily life, instead of just blaming a company for not putting enough warnings or barriers to entry?
Then the issue is simply what we perceive as the predominant marketing message. I know that in all legally binding material Tesla states exactly what the system is capable of and how alert the driver needs to be. But in my opinion that is vastly overshadowed by the advertising Tesla runs for their FSD capability. They show a 5-second message about how they are required by law to warn you to stay alert at all times, before showing the car driving itself for 3 minutes with the demo driver’s hands completely off the wheel.
It also fails to mention how the accident rate compares to human drivers.
That may be because Tesla refuses to publish proper data on this, lol.
Yeah, they claim it’s ten times better than a human driver, but none of their analysis methods or data points are available to independent researchers. It’s just marketing.
What Tesla is (falsely IMO) advertising as “full self driving” is available in all new Mercedes vehicles as well and works anywhere in the US.
Mercedes is in the news for expanding that functionality to a level where they are willing to take liability if the vehicle causes a crash during this new mode. Tesla does not do that.
I have compact view. I tap on a thumbnail to make the image “full screen”. In short succession I tap on the screen once, then touch and drag, which zooms the image.
Since I have it set to dismiss/leave “full screen” by swiping the image up or down, I need the tap before dragging to zoom into the image.