Hope they pay their IT guy well.
He wasn’t helping them; he was calling out their bullshit. That’s how it goes with people more interested in creating an illusion of competence than pursuing actual competence: they’re more interested in hiding issues than fixing them, so someone calling out issues is more of a problem to them than the issues themselves.
And good luck typing that in if you don’t know the alphabet it’s written in and can’t copy/paste it.
Personally, instead of smart bulbs, I’d use smart switches for automating lighting. There’s no need for every bulb to be individually controlled and carry all of the overhead involved in that. On that note, I’d also love to see DC lighting circuits that can take LED bulbs without needing an AC-to-DC driver in each bulb (the driver tends to be the component that fails first, IIRC).
Just tried looking at the state of the smart switch market and fuck Samsung for naming their app for transferring files from phone to PC “Smart Switch”. Especially because there are plenty of ways to do that already that don’t require a shitty Samsung app.
Excluding Samsung from the search, I’d suggest not looking for products directly but finding enthusiast communities that are building their own smart homes. There is more to it than just getting devices that don’t rely on some specific company’s web services. You’ll also need to set up a controller/server, connect all of the devices to it, and then figure out how you want to interact with it (e.g. via phone, scheduling, voice commands, etc.). I haven’t done this myself, but I’m guessing these are all solved problems; I just doubt anyone would call setting it all up easy.
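For a sense of what the controller side of this looks like, here’s a minimal sketch of a self-hosted automation, assuming Home Assistant as the controller (the entity ID and alias are made up for illustration, not from any real setup):

```yaml
# Hypothetical Home Assistant automation: flip a smart switch
# at sunset. The entity_id is an example placeholder.
automation:
  - alias: "Hallway lights at sunset"
    trigger:
      - platform: sun
        event: sunset
    action:
      - service: switch.turn_on
        target:
          entity_id: switch.hallway_lights
```

Once devices are paired to the controller (often over Zigbee, Z-Wave, or Wi-Fi), automations like this run locally without depending on any vendor’s cloud.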
Software open-sourced, too.
How to Win Friends and Influence People by Dale Carnegie should be required reading for everyone. It’s full of things that are so obvious in hindsight but go against our natural instincts so we blunder through attempts to persuade not realizing that we might be increasing resistance rather than decreasing it.
Like the whole, “you might be right but you’re still an asshole” thing. Being correct just isn’t enough. In some cases you get crucified and then after some time has passed, the point you were trying to convince others of becomes the popular accepted fact. And they might even still hate you after coming around on the point you were trying to make.
That book won’t turn you into a persuasive guru, but it will help avoid many of the pitfalls that make debates turn ugly or individuals stubborn.
Or, on the flip side, you can use the inverse of the lessons to become a more effective troll and learn how to act like you’re arguing one thing while really trying to rile people up or convince them of the opposite. I say this not so much to suggest it but because knowing about this can make you less susceptible to it (and it’s already a part of the Russian troll farm MO).
It all depends on how and what you ask it, plus an element of randomness. Remember that it’s essentially a massive text predictor. The same question asked in different ways can lead it into predicting text based on different conversations it trained on. There’s a ton of people talking about Python; some know it well, others not as well. And the LLM can end up giving some kind of hybrid of multiple other answers.
It doesn’t understand anything, it’s just built a massive network of correlations such that if you type “Python”, it will “want” to “talk” about scripting or snakes (just tried it, it preferred the scripting language, even when I said “snake”, it asked me if I wanted help implementing the snake game in Python 😂).
So it is very possible for it to give accurate responses sometimes and wildly different responses at other times. Like with the African countries that start with “K” question, I’ve seen reasonable responses and meme ones. It’s even said there are none while also acknowledging Kenya in the same response.
I’ve found ChatGPT to be a great learning aid. You just don’t use it to jump straight to the answers; you use it to explore the gaps and edges of what you know or understand. Add context and details, not final answers.
Also a conservative MO: act with hostility (or neglect) towards a target and then scream very loudly about any pushback and try to frame yourself as a victim to gain support.
When I first heard of the MS feature, my first thought was that there’s gotta be a more efficient way to do this than taking screenshots and analyzing the image. The window manager has all of that information plus more context (like knowing that these pixels are part of a non-standard window that uses transparency to act like a non-rectangular shape, while this thing that looks like a window is actually an image because the user was looking at someone else’s screenshot).
Even better would be integration with the applications themselves; they have even more contextual information than the window manager has.
From my experience, blocking 3rd party cookies in general doesn’t seem to make any difference for site functionality anyways. Though I never log into sites with a Google or FB account other than Google or FB sites (and rarely at all for the latter).
I like the grid add-on for Firefox. It blocks pretty much anything 3rd party by default. You can control cookies separately from everything else, and I can’t remember any time I’ve needed to enable those cookies to get a site working properly (whereas sometimes you need to enable scripts, media, or iframes for a CDN or something).
And all he had to do was act like he wanted to backtrack on the offer, and the courts forced the sale through quickly rather than slowing things down to consider whether social media should even be a privately owned thing run at the whims of a guy who used that same platform to try to ruin someone’s life with a baseless pedophilia accusation, just because that someone hurt his feelings by telling him his sub idea wouldn’t work and that he was getting in the way rather than helping anything.
I just wonder if the courts fell for his ploy or if they just played the part they were supposed to and the whole thing was an act.
Also, I don’t think it’s a coincidence that he spent $44 billion on Twitter and then, after pretty much ruining it, Tesla shareholders (which are majority institutional shareholders) for some reason voted through a $50 billion compensation package for him.
Knowing the limits of your knowledge can itself require an advanced level of knowledge.
Sure, you can easily tell about some things, like if you know how to do brain surgery or if you can identify the colour red.
But what about the things you think you know but are wrong about?
Maybe your information is outdated, like you think you know who the leader of a country is but aren’t aware that there was just an election.
Or maybe you were taught it one way in school but it was oversimplified to the point of being inaccurate (like thinking you can do physics calculations but ending up treating everything as frictionless spheres in gravityless space because you didn’t take the follow-up class where the first thing they said was “take everything they taught you last year and throw it out”).
Or maybe the area has since developed beyond what you thought were the limits. Like if someone wonders if they can hook their phone up to a monitor and another person takes one look at the phone and says, “it’s impossible without a VGA port”.
Or maybe you’re applying knowledge from one area to another due to a misunderstanding. Like overhearing a mathematician correcting a colleague that said “matrixes” with “matrices” and then telling people they should watch the Matrices movies.
Now consider that not only are AIs subject to these things themselves, but the information they are trained on is also subject to them and their training set may or may not be curated for that. And the sheer amount of data LLMs are trained on makes me think it would be difficult to even try to curate all that.
I agree. I think of ChatGPT kinda like a nerdy friend that knows a lot of things but isn’t an authority or expert on any of it. It has lots of useful information; there’s just a bunch of useless and dangerous information peppered in with it.
Adjust how you move forward with that information based on the stakes. If you’re about to bet your life savings because ChatGPT said it’s a sure thing, maybe check some other sources; but if you’re going to bet the next round of drinks on something (and can afford that), who cares?
Sounds like a great reason to unionize.
They had faith that people who got into power would use it in good faith (and get there in good faith), while or after having fought a war against a power they believed wasn’t acting in good faith.
I just wonder how much longer this system can hold up. It’s got different parts that conflict with each other, but different people value different parts of it, to the point that getting rid of any of it is going to be, ah, a bit rough.
They could feel like there’s nothing more to lose if it doesn’t make it back but they might be able to claw their way back if it succeeds. “They” being the individuals making the recommendation, not the individuals more concerned about the company overall. If Boeing decides the spaceflight industry isn’t worth the risks, a downsize or complete closing of that part of the company could cost the jobs of those who are the experts in this situation.
So it might not be a case of “we think it’s safe to return”. It might be “returning safely is the only scenario where we aren’t fucked, so let’s roll the dice”.
Or Musk has sycophants running IT who can’t tell the difference.
“So I’ve got this script that will make 20k simultaneous requests and average the response time to determine if we’re being DDoSed. Someone’s got it out for us because there’s a short DDoS attack every single time I run the script! These guys are good, whoever they are! I’ll trace some IPs… Oh shit. The DDoS is coming from inside the building! Better fire some more people. And don’t worry, I’ll be running my DDoS detector script a lot for the interview to make sure we don’t get attacked!”
I don’t think this is difficult technology to figure out.