

We’re as close to quantum computers as we are to ChatGPT becoming sentient.
made you look
My favourite thing about Halo FTL is how it handles causality, basically relying on the universe to act like a sponge and “soak up” violations as it reconciles them across spacetime.
Send too much mass through (aka the Halo array) and it massively slows down travel galaxy-wide, as the spacetime is “sodden” and takes longer to reconcile. Meanwhile, in the days leading up to the firing of the Halo array, slipspace travel suddenly became easier and quicker than had ever been experienced, as the Forerunners realised that after the firing of the array the amount of slipspace travel in the entire galaxy would be nil.
The calculator leaked 32GB of RAM because the system has 32GB of RAM. Memory leaks are uncontrollable and expand to take whatever space they’re given; if the system had 16MB of RAM, that’s all the leak could take before crashing.
Abstractions can be super powerful, but you need an understanding of why you’re using the abstraction vs. what it’s abstracting. It feels like a lot of them are being used simply to check off a list of buzzwords.
And here, they are donating to a project by DHH because they like the project.
Said project is an Arch installer with some extra packages thrown in by default, not exactly groundbreaking stuff.
There are two ways to make money off sites like Imgur: charge for subscriptions with extra features, or boast about your user numbers and views and get bought by a company that wants to sell ad space.
Imgur did both.
Honestly, the Edge Collections feature is fantastic for this, but it’s hidden in a submenu, so it feels like they don’t want people to use it.
They’re like a hybrid bookmark and note-taking feature: add a group, name it, add tabs to it, add notes to it, reorder it all, etc. The only thing missing is a way to turn a tab group or window into a collection and back again; currently it’s a manual process (add/remove one tab at a time).
Ehh, bots have always presented nonsense UAs to servers. And since modern browsers hard-code the OS version in the UA string, pretending to be an old browser on an old OS could be a (probably ineffectual) way to bypass fingerprinting.
Anything that polls location data can record it and sell it; there are probably more apps that sell it than don’t.
Or $25 a quarter, and that’s if you buy every single thing they release.
There’s always the whales, but personally I’d skip the “Horse Ranch” expansion, or the one that added Fairies.
The game’s 11 years old, a constant flow of DLC and expansions adds up over time.
So just don’t buy it all at full price I guess.
Funny thing is, it was actually the device they connected that was faulty, the build of Windows they were using just didn’t handle that failure condition at the time.
MS at least learnt that lesson (for the most part): actually test things first.
The headline makes this sound a lot worse than the article does.
From the article, there’s basically a list of exemptions in the law describing who doesn’t need to follow it (e.g. an online booking site for doctor’s visits); everybody else needs to check the rules to see if they do. And if they do, they then need to follow extra child-safety rules (e.g. Roblox is opting under-16s out of open DMs by default).
GitHub can quite rightly say they don’t fall under the restrictions of the law, and that could be the end of it. The simple fact that it doesn’t have any form of private messaging feature is probably enough.
they’re just a radical left communist
God I wish that was remotely true
JXL is two separate image formats stuck together: an improved version of JPEG that can also losslessly and reversibly recode most existing JPEG images at a smaller size, and a PNG-like format (evolved from FLIF/FUIF) that can do lossless or lossy encoding.
“VarDCT” (the improved JPEG) turns out to be good enough that “Modular” mode (the FLIF/FUIF-like one) isn’t needed much outside of lossless encoding. One neat feature of Modular mode, though, is that it progressively encodes the image at different sizes: if you decode the stream as you read in bytes, you start with a small version of the image and get progressively larger output until you reach the original.
Why is that useful? You can encode a single high-DPI image (e.g. 2x scale), and clients on 1x-scale monitors can just stop decoding at a certain point and get a half-sized image out of it. You don’t need separate per-DPI variants.
iirc the main reason for QOI was to have a simple format because “complexity is slow”, so by stripping things that the author didn’t consider important the idea was the resulting image format would be quicker and smaller than something like PNG or WebP.
Not sure how well that held up in practice; a lot of that complexity is actually necessary for a lot of use cases (e.g. you need colour profiles unless you’re only ever dealing with sRGB), and I remember a bunch of low-hanging-fruit optimisations for PNG encoders at the time that improved encoding speed by quite a bit.
AVIF is funny because they kept the worst aspect of WebP (lossy video-based encoding) while removing the best (lossless mode). There was an attempt at WebP2, using AV1 and a proper lossless mode, but Google killed that off as well.
But hey, now that they’re releasing AV2 soon, we’ll eventually have an incompatible AVIF2 to deal with. Good thing they didn’t support JPEG-XL, it’d just be too confusing to have to deal with multiple formats.
Lossless is fine, lossy is worse than JPEG.
That’d just be overall worse, it’d never be smaller than a comparable JPEG image, and it wouldn’t allow for any compression/quality benefits.
Yep, their frontend used a shared caller that would return the parsed JSON response if the request was successful, and error out otherwise. The code that called it would then use the returned object directly.
So I assume that most of the backend did actually surface error codes via the HTTP layer, and it was just this one endpoint that didn’t (which then broke the client-side code when it tried to access non-existent properties of the response object); otherwise basic testing would have caught it.
That’s also another reason to use the HTTP codes: by storing the error in the response body, you now need extra code between the function doing the API call and the function handling a successful result, just to examine the body and see if there was actually an error, all based on an ad-hoc per-endpoint format.
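A minimal sketch of that failure mode (all names hypothetical, and Python standing in for their actual frontend code): a shared caller that trusts the HTTP status works fine for every endpoint that reports errors properly, and silently hands back garbage for the one that returns 200 with an error in the body.

```python
import json

def api_call(status, body):
    # Stand-in for the shared caller: raise on HTTP errors,
    # otherwise return the parsed JSON for direct use.
    if status >= 400:
        raise RuntimeError(f"HTTP {status}")
    return json.loads(body)

# Well-behaved endpoint: the error surfaces via the HTTP layer and is caught.
try:
    api_call(500, '{"error": "boom"}')
except RuntimeError as e:
    print(e)  # HTTP 500

# The broken endpoint: 200 OK with the error tucked into the body.
# The caller "succeeds", so client code reads properties that don't exist.
user = api_call(200, '{"error": "no such user"}')
print(user.get("name"))  # None
```

With proper status codes, the error handling lives in one place (the shared caller); with in-body errors, every call site needs bespoke checks for that endpoint’s format.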
It was Apple. Or rather, regulators and partnering companies leaning on Apple to manage the content on their app store better, including the content that you could find via those apps.
Could say something about how the app stores are a monopoly power, the chilling effect these wide-ranging and heavy-handed content policies have, and why the open web (and web apps) are a better option. But we handed the web over to Google anyway, so it’s not that much better.