My naming convention for C++ is that custom types are capitalized and instances aren’t. So I might write User user;.
So far “more data” has been the solution to most problems, but I don’t think we’re close to the limit of how much useful information can be learned from the data even if we’re close to the limit of how much data is available. Look at the AIs that can’t draw hands. There are already many pictures of hands from every angle in their training data. Maybe just having ten times as many pictures of hands would solve the problem, but I’m confident that if that was not possible then doing more with the existing pictures would also work.* Algorithm design just needs some time to catch up.
*I know that the data that is running out is text data. This is just an analogy.
What occasions are you referring to? I know people claim that Israeli use of white phosphorous munitions is illegal, but the law is actually quite specific about what an incendiary weapon is. Incendiary effects caused by weapons that were not designed with the specific purpose of causing incendiary effects are not prohibited. (As far as I can tell, even the deliberate use of such weapons in order to cause incendiary effects is allowed.) This is extremely permissive, because no reasonable country would actually agree not to use a weapon that it considered effective. Something like the firebombing of Dresden is banned, but little else.
Incendiary weapons do not include:
(i) Munitions which may have incidental incendiary effects, such as illuminants, tracers, smoke or signalling systems;
(ii) Munitions designed to combine penetration, blast or fragmentation effects with an additional incendiary effect, such as armour-piercing projectiles, fragmentation shells, explosive bombs and similar combined-effects munitions in which the incendiary effect is not specifically designed to cause burn injury to persons, but to be used against military objectives, such as armoured vehicles, aircraft and installations or facilities.
The issue I have with referring to the current situation as a bubble is that this isn’t just hype. The technology really is amazing, and far better than what people had been expecting. I do think that most current attempts to commercialize it are premature, but there’s such a big first-mover advantage that it makes sense to keep losing money on attempts that are too early in order to succeed as soon as it is possible to do so.
Multiple studies are showing that training on data contaminated with LLM output makes LLMs worse, but there’s no inherent reason why LLMs must be trained on this data. As you say, people are aware of it and they’re going to be avoiding it. At the very least, they will compare the newly trained LLM to their best existing one and if the new one is worse, they won’t switch over. The era of being able to download the entire internet (so to speak) is over but this means that AI will be getting better more slowly, not that it will be getting worse.
I don’t disagree, but before the recent breakthroughs I would have said that AI is like fusion power in the sense that it has been 50 years away for 50 years. If the current approach doesn’t get us there, who knows how long it will take to discover one that does?
It would be odd if AI somehow got worse. I mean, wouldn’t they just revert to a backup?
Anyway, I think (1) is extremely unlikely but I would add (3) the existing algorithms are fundamentally insufficient for AGI no matter how much they’re scaled up. A breakthrough is necessary which may not happen for a long time.
I think (3) is true but I also thought that the existing algorithms were fundamentally insufficient for getting to where we are now, and I was wrong. It turns out that they did just need to be scaled up…
The important thing here isn’t that the AI is worse than humans. It’s that the AI is worth comparing to humans. Humans stay the same while software can quickly improve by orders of magnitude.
This is what international law has to say about incendiary weapons:
- It is prohibited in all circumstances to make the civilian population as such, individual civilians or civilian objects the object of attack by incendiary weapons.
- It is prohibited in all circumstances to make any military objective located within a concentration of civilians the object of attack by air-delivered incendiary weapons.
- It is further prohibited to make any military objective located within a concentration of civilians the object of attack by means of incendiary weapons other than air-delivered incendiary weapons, except when such military objective is clearly separated from the concentration of civilians and all feasible precautions are taken with a view to limiting the incendiary effects to the military objective and to avoiding, and in any event to minimizing, incidental loss of civilian life, injury to civilians and damage to civilian objects.
- It is prohibited to make forests or other kinds of plant cover the object of attack by incendiary weapons except when such natural elements are used to cover, conceal or camouflage combatants or other military objectives, or are themselves military objectives.
This treeline is clearly not located within a concentration of civilians, and it is concealing (or plausibly believed to be concealing) enemy combatants, so the use of incendiary weapons is unambiguously legal.
Maybe Meta doesn’t want to get the Google treatment.
I think the more likely explanation is that being able to filter out AI-generated text gives them an advantage over their competitors at obtaining more training data.
Netscape Navigator is spelled “Firefox”.
I think IBM was different because its lunch was eaten almost entirely by other American companies (chiefly Microsoft). That probably wouldn’t be the case if Intel were allowed to decline in a similar manner.
I think they’ll recover. Letting them fail would be a national security problem.
Bring back Internet Explorer.
Oh, I just assumed he had bluish ejaculate.
If I was in some sort of distress and someone sent me a comforting message, and I later found out they had ChatGPT write the message for them, I think I would abandon the friendship as a pointless endeavor.
My initial response is the same as yours, but I wonder… If the intent was to comfort you and the effect was to comfort you, wasn’t the message effective? How is it different from using a cell phone to get a reminder about a friend’s birthday rather than memorizing when the birthday is?
One problem that both the AI message and the birthday reminder have is that they don’t require much effort. People apparently appreciate having effort expended on their behalf even if it doesn’t create any useful result. This is why I’m currently making a two-hour round trip to bring a birthday cake to my friend instead of simply telling her to pick the one she wants, have it delivered, and bill me. (She has covid so we can’t celebrate together.) I did make the mistake of telling my friend that I had a reminder in my phone for this, so now she knows I didn’t expend the effort to memorize the date.
Another problem that only the AI message has is that it doesn’t contain information that the receiver wants to know, which is the specific mental state of the sender rather than just the presence of an intent to comfort. Presumably if the receiver wanted a message from an AI, she would have asked the AI for it herself.
Anyway, those are my Asperger’s musings. The next time a friend needs comforting, I will tell her “I wish you well. Ask an AI for inspirational messages appropriate for these circumstances.”
Middle right panel is a cock and balls. (OP is into needle play.)
Good riddance to HR rubbish.
Nothing can fix things because teenagers will not cooperate. If Instagram could identify all its teenage users, those users would move to a platform that couldn’t. The only thing the restrictions achieve is a reduction in the market share of the platform with the restrictions.