

Trick the algorithm by reporting all MMA content too.
it doesn’t work once you leave the atmosphere.
Fun fact: just this past week an experiment on a lunar lander confirmed that GPS signals can be detected from the surface of the moon. I don’t know if those signals can give any kind of location precision, but it is an interesting finding.
The origin story of Brave is entirely right-wing. Brendan Eich was forced out of Mozilla because of his public stances on political topics. It’s no secret that after being forced out for his politics, he went on to create a new browser company.
just realised
“Just”? No, he’s always been open about this, and that’s why his appointment as Mozilla CEO was so controversial in 2014, and why the board revolted and he ended up resigning 11 days into his tenure.
The whole origin story of Brave is steeped in right wing politics.
What if I told you that there are really stupid comments on Lemmy as well
I think in terms of cultural exchange of ideas and the enjoyment of being on the internet, 2005-2015 or so was probably the best. The barrier to entry was lowered to where almost anyone could make a meme or post a picture or upload a video or write a blog post or even a microblog post or forum comment of a single sentence and it might go viral through the power of word of mouth.
Then, once there was enough value in going viral, people started gaming virality as a measure of success, so it was no longer a reliable signal of quality.
But plenty of things are now better. I think maps and directions are better with a smartphone. Access to music and movies is better than ever. It’s nice to be able to seamlessly video chat with friends and family. There’s real utility there, even if you sometimes have to work around things that aren’t ideal.
I get your perspective, but I think it’s inaccurate when applied to current consumer behavior. The iPhone market share is like 60%. You can’t tell me that 60% is inherently more consumerist than the 40% that is Android users, especially when Apple users actually tend to keep their phones longer before upgrading to a new one.
Especially when we’re talking about the mid-tier, non-flagship model in the lineup, like the non-Pro iPhones.
Plenty of people want small but powerful phones. The iPhone Mini line, for the 12 and 13 generation, offered the same features and processing power as the regular sized iPhone. But they didn’t offer as much as the “Pro” model, which came in both normal and “Max” sizes.
So if you wanted the latest and greatest in CPU/GPU, camera sensors/lenses, display tech (not necessarily size), you tended to opt for the phone that just happened to be bigger.
Basically, there’s never been a side by side comparison of the latest tech that actually happens to fit within the size of the first 5 generations of iPhone, versus the standard size of a flagship today.
The United States, Canada, and Japan have the requirement, too. Australia’s takes effect in November 2025.
They’ve basically brought the broken ladder of the management track over to the technical track of increasing technical expertise (without necessarily increasing management/administrative responsibilities).
Currently, each generation of executives doesn’t come from within the company. There’s no simple path from mail room to executive anymore. Now, you have to leave the company to go get an MBA, then get hired by a consulting firm, then consult with that company as a client, before you’re on track to make senior management at the company.
If the technical track is going this way, too, then these companies are going to become more brittle, and the current generation of entry level workers are going to hit a lot more career dead ends. It’s bad for everyone.
No, I don’t think you owe an apology. It’s super common terminology, almost to the point where I wouldn’t even consider it outright wrong to describe it as a SoC. It’s just that the distinction between a single chip and multiple chiplets packaged together is blurred, and almost impossible for an outsider to make without really getting into the published spec sheets for a product (and sometimes it may not even be known then).
It’s just more technically precise to describe them as SiP, even if SoC functionally means something quite similar (and the language may evolve to the point where the terms are interchangeable in practice).
When I plug my phone into the wall, there are chips in the wall charger and on both sides of the cable, because the simple act of charging requires a handshake and an exchange of information notifying the charger, the cable, and the phone what charging modes are supported, and how to ask for more or less power.
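To give a feel for what those chips are negotiating, here’s a toy sketch in Python. The profiles, limits, and names are all made up for illustration; the real USB Power Delivery protocol exchanges structured messages over the cable’s CC line rather than anything like this.

```python
# Toy sketch of a charge negotiation, loosely in the spirit of USB Power
# Delivery. All numbers and names below are illustrative, not the real
# PD message format.

CHARGER_PROFILES = [(5.0, 3.0), (9.0, 3.0), (15.0, 3.0), (20.0, 2.25)]  # (volts, amps)
CABLE_MAX_AMPS = 3.0    # the e-marker chip in the cable reports its rating
PHONE_MAX_WATTS = 27.0  # the phone knows what its battery controller can take

def negotiate(profiles, cable_limit, phone_limit):
    """Pick the highest-wattage profile that charger, cable, and phone all allow."""
    best = None
    for volts, amps in profiles:
        amps = min(amps, cable_limit)  # the cable caps the current
        watts = volts * amps
        if watts <= phone_limit and (best is None or watts > best[2]):
            best = (volts, amps, watts)
    return best

volts, amps, watts = negotiate(CHARGER_PROFILES, CABLE_MAX_AMPS, PHONE_MAX_WATTS)
print(f"Negotiated {volts:.0f} V at {amps:.2f} A = {watts:.0f} W")
```

The point is just that all three parties have to report what they support before any serious power flows, which is why even the cable needs a chip.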
Seriously? Am I the only one thinking this could be done with less than 10 chips at most?
How many chips are in a fully configured desktop computer? There are like dozens on any given motherboard, controlling all the little I/O requirements. Each module of RAM is several chips. If you use expansion cards, each card will have a few chips, too. Meanwhile, the keyboard and the mouse each have a few chips, and the display/monitor has a bunch more.
I’d be surprised if the typical computer had less than 100 chips.
Now let’s look at the car functions. A turn signal that blinks, oscillating between on and off? That’s probably a chip. A windshield wiper that can do intermittent wiping at different speeds? Another chip or more. Variable valve timing that’s electronically controlled? Another few chips. Each sensor that detects something, from fuel tank status to engine knocking to air/fuel mixture? Probably another chip. Controllers that combine all this information to determine how to mix the fuel and air, whether to trigger a warning light on the dash, etc.? Probably more chips. What about deployment of airbags, or triggering of the anti-lock braking system? Cruise control requires a few more chips, as speedometers and odometers are now electronic rather than the old analog systems. Smart cruise control and lane detection need even more chips. Hybrid drivetrains that charge or discharge batteries need dozens of chips controlling the flow of power (and the logic of when power should flow in which direction).
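To make even the simplest case concrete, here’s roughly the kind of logic that a turn-signal chip embodies (a toy Python sketch; on a real ECU this would be firmware driving a GPIO pin, and everything below is illustrative):

```python
# Toy sketch of the logic a turn-signal chip replaces: a timer toggling a
# lamp between on and off. Here it just prints instead of driving a pin.

import time

def blink(period_s=0.5, cycles=3):
    lamp_on = False
    for _ in range(cycles * 2):
        lamp_on = not lamp_on  # oscillate between on and off
        print("lamp ON" if lamp_on else "lamp OFF")
        time.sleep(period_s)

blink()
```

Trivial on its own, but multiply that by every blinking, sensing, and timing function in the car and the chip count climbs fast.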
By the time Toyota was in the news in 2011 for potential throttle sticking problems that killed people, it was typical for even economy cars to have something like 30 ECUs controlling different things, with each ECU and its associated sensors requiring multiple chips.
Some modern perks require even more chips. Automatic lights? High beam dimming? Automatic wipers? Remote start or shutting off the engine at idle?
And that’s just for driving. FM tuner? Chips. AM tuner? More chips. Bluetooth and CarPlay/Android Auto? More chips. Rear view camera, now mandated on all new cars? More chips. A built-in GPS or infotainment system? A full-blown computer.
All the little analog controllers that were present in cars in the 80’s are now more efficiently performed on integrated circuits, including analog circuits. Each function will require its own chip. If you’re trying to recreate the exact functionality of a typical car from the 1990’s, you’d probably still need a minimum of a few hundred chips to pull it off. And it’s probably smart to segment things so that each module does one thing in a specialized way, isolated from the others, lest an unexpected input on the radio mess up the spark plug timing.
The world is run by chips, and splitting up the functions into multiple computers/controllers, with multiple chips each, is just the easier and more efficient way to do things.
Tags interfere with human readability. Open any markdown file with a text editor in plain text and you can basically read the whole thing as it was intended to be read, with possibly the exception of tables.
There’s a time and a place for different things, but I like markdown for human readable source text. HTML might be standardized enough that you can do a lot more with it, but the source file itself generally isn’t as readable.
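For a concrete (made-up) example, here’s the same snippet as Markdown source and as HTML source:

```markdown
# Trip notes
We hiked **12 km** and saw a [waterfall](https://example.com/falls).
```

```html
<h1>Trip notes</h1>
<p>We hiked <strong>12 km</strong> and saw a
<a href="https://example.com/falls">waterfall</a>.</p>
```

Both render the same, but only one of them reads cleanly when opened as plain text.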
That’s why I think the history of the U.S. phone system is so important. AT&T had to be dragged into interoperability by government regulation nearly every step of the way, but ended up needing to invent and publish the technical standards that made federation/interoperability possible, after government agencies started mandating them. The technical infeasibility of opening up a proprietary network has been overcome before, with much more complexity at the lower OSI layers, including defining new open standards regarding the physical layer of actual copper lines and switches.
the only option for top performance will be a SoC
System in a Package (SiP) at least. It might not be efficient to etch the logic and that much memory onto the same silicon die, as the latest and greatest TSMC node will likely be much more expensive per square mm than the cutting-edge memory node at Samsung or whichever foundry is making the memory.
But with advanced packaging going the way it has over the last decade or so, it’s going to be hard to compete with the latency/throughput of an in-package interposer. You can only do so much with the vias/pathways on a printed circuit board.
As a result, that also opens up Apple’s strategy of selling the one-year-old model at a discount. If an Apple model can get updates 6 years after release, then buying an 18-month-old model (but as a new phone) still assures you of 4.5 years of updates.
I’d argue that telephones are the original federated service. There were fits and starts to getting the proprietary Bell/AT&T network to play nice with devices or lines not operated by them, but the initial system for long distance calling over the North American Numbering Plan made it possible for an AT&T customer to dial non-AT&T customers by the early 1950’s, and set the groundwork for the technical feasibility of the breakup of the AT&T/Bell monopoly.
We didn’t call it spam then, but unsolicited phone calls have always been a problem.
I hear it’s amazing when the famous purple stuffed worm in flap-jaw space with the tuning fork does a raw blink on Hari Kiri Rock.
(the preview fetch is not e2ee afaik)
Technically, it is, but end-to-end encryption only covers the data between the ends, not what one of the ends chooses to do with it. If one end of the conversation logs the conversation in an insecure way, the conversation itself might technically be encrypted in transit, but its contents can still be learned by someone else. The same goes if one end simply forwards a message to a new party who wasn’t part of the original conversation.
The link previews are happening outside of the conversation, and that action can be seen by people like the owner of the website, your ISP, and maybe WhatsApp itself (if configured in that way, not sure if it does).
So end to end isn’t a panacea. You have to understand how it fits into the broader context of security and threat models.
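A toy sketch of what I mean (made-up code, not how WhatsApp is actually implemented):

```python
# Toy illustration of why a link preview can leak even when the message
# body is end-to-end encrypted: the fetch of the URL happens in the clear,
# outside the encrypted conversation.

import re
import urllib.request

def endpoint_receives(plaintext: str):
    # E2EE has already done its job by this point: only this endpoint
    # sees the plaintext of the message.
    match = re.search(r"https?://\S+", plaintext)
    if match:
        url = match.group()
        # This request is *outside* the encrypted channel. The site owner,
        # the ISP (via DNS/SNI metadata), and possibly the app's preview
        # service all learn that this device looked up this URL.
        with urllib.request.urlopen(url) as resp:
            resp.read(1024)  # pretend we parse out a <title> for the preview
        print("preview fetched from", url)

endpoint_receives("check this out: https://example.com")
```

The encryption never breaks; the leak is in what the endpoint does next.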
Four years? You gotta pump those numbers up. Those are rookie numbers.