My Garmin also shows the shape of the graph, but to be honest I don’t trust it at that resolution. I just keep track of the moving average, which is the main value that is shown. I do agree that that kind of data shouldn’t be hidden from the user.
I think that the idea is that by setting a strict deadline after which women can’t have children or marry, they are forced to start a family now or risk regretting it later. That’s the only way I can make sense of this bizarre scenario.
It’s off by default and only kicks in when you end your search query with a question mark. Even that trigger can be turned off.
But 2K and 4K do refer to the horizontal resolution. There’s more than one resolution that’s referred to as 2K, for example 2048 x 1080 DCI 2K, but also 1920 x 1080 Full HD, since it’s also almost 2000 pixels wide. The total number of pixels is in the millions, not thousands.
For 4K some common resolutions are 4096 x 2160 DCI 4K and 3840 x 2160 UHD, which both have a horizontal resolution of about 4000 pixels.
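To make the naming concrete, here’s a quick sketch that prints the horizontal width and total pixel count for the resolutions mentioned above (the dictionary is just the four examples from the comment, nothing official):

```python
# Quick check of how "2K"/"4K" names relate to horizontal resolution
# and total pixel count.
resolutions = {
    "DCI 2K": (2048, 1080),
    "Full HD": (1920, 1080),
    "DCI 4K": (4096, 2160),
    "UHD": (3840, 2160),
}

for name, (width, height) in resolutions.items():
    # The "2K"/"4K" label tracks the width; the total is in the millions.
    print(f"{name}: {width} px wide, {width * height / 1e6:.1f} million pixels total")
```

Both 2K variants land just under 2000 px wide with roughly 2 million pixels; both 4K variants land near 4000 px wide with roughly 8–9 million pixels.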
Using English is the only way to make sure all my colleagues are able to read it, but if it’s just meant for you, or only for Spanish-speaking people, I’d say why not.
That’s not as effective, since it can’t block anything that’s hosted from a hostname that also serves regular content without also blocking the regular content. It also can’t trick websites into thinking that nothing is blocked and it can’t apply cosmetic rules. I use it for my devices, but in browsers I supplement it with uBlock Origin (or whatever is available in that browser).
That’s because Bitwarden uses various methods to enable auto-fill in places where the native auto-fill capability of Android doesn’t work. See https://bitwarden.com/help/auto-fill-android/ for an explanation.
Sure, but I’m just playing around with small quantized models on my laptop with integrated graphics and the RAM was insanely cheap. It just interests me what LLMs are capable of that can be run on such hardware. For example, Llama 3.2 3B only needs about 3.5 GB of RAM, runs at about 10 tokens per second, and while it’s in no way comparable to the LLMs that I use for my day-to-day tasks, it doesn’t seem to be that bad. Llama 3.1 8B runs at about half that speed, which is a bit slow, but still bearable. Anything bigger than that is too slow to be useful, but still interesting to try for comparison.
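Those RAM figures roughly follow from a back-of-envelope calculation: parameter count times bits per weight, plus some fixed overhead for the KV cache and runtime buffers. The bits-per-weight and overhead values below are assumptions for illustration, not measurements:

```python
def estimate_ram_gb(params_billions, bits_per_weight, overhead_gb=1.0):
    """Rough memory estimate for a quantized model: weight storage plus a
    fixed overhead for KV cache, activations and runtime buffers.
    The overhead figure is a ballpark assumption, not a measurement."""
    weight_gb = params_billions * bits_per_weight / 8  # 1e9 params * bits/8 bytes
    return weight_gb + overhead_gb

# A 3B model at ~6 bits per weight lands in the same ballpark as the
# ~3.5 GB mentioned above; an 8B model needs noticeably more.
print(f"3B model: ~{estimate_ram_gb(3, 6):.1f} GB")
print(f"8B model: ~{estimate_ram_gb(8, 6):.1f} GB")
```

This is why quantization matters so much on laptops: the same 8B model at 16 bits per weight would need roughly 16 GB for the weights alone.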
I’ve got an old desktop with a pretty decent GPU in it with 24 GB of VRAM, but it’s collecting dust. It’s noisy and power hungry (older generation dual socket Intel Xeon) and still incapable of running large LLMs without additional GPUs. Even if it were capable, I wouldn’t want it to be turned on all the time due to the noise and heat in my home office, so I’ve not even tried running anything on it yet.
The only time I can remember 16 GB not being sufficient for me is when I tried to run an LLM that required a tad more than 11 GB and I had just under 11 GB of memory available due to the other applications that were running.
I guess my usage is relatively lightweight. A browser with a maximum of about 100 open tabs, a terminal, a couple of other applications (some of them electron based) and sometimes a VM that I allocate maybe 4 GB to or something. And the occasional Age of Empires II DE, which even runs fine on my other laptop from 2016 with 16 GB of RAM in it. I still ordered 32 GB so I can play around with local LLMs a bit more.
I’m not going to defend Apple’s profit maximization strategy here, but I disagree. Most people won’t end up buying a cable and adapter because they already have one, and in contrast to those pieces made of plastic and metal, the packaging is mostly made of paper. I’m pretty confident that the reduction in plastic and metal makes up for the extra packaging that’s produced for the minority that does buy a cable and/or adapter.
Telegram’s “privacy” is fully based on people trusting them not to share their data - to which Telegram has full access - with anyone. Well, apart from the optional E2EE “secret chat” option with non-standard encryption methods that can only be used for one-on-one conversations. If it were an actual privacy app, like Signal, they could’ve cooperated with authorities without giving away chat contents and nobody would’ve been arrested. I’m a Telegram user myself and from a usability standpoint I really like it, but let’s be realistic here: for data safety I would pick another option.
I would look into how Matrix handles this, for example. It involves unique device keys, device verification from a trusted device, and cross-signing. It’s not just some private key that’s spread around to random new devices that you lose track of.
They’ve implemented it in such a way that you only have access to an encrypted chat on a single device, so no syncing between devices. Syncing E2EE chats across devices is more difficult to pull off, but it’s definitely possible and other services do that by default.
I don’t understand the relevance of what you’re saying. Do you mean that the platform should have the right to allow biological females only (following the definitions of your law system)? Do you think that that’s implied when a platform is female only and defensible in court? Not a snarky remark, just genuinely curious what you mean. This case was all about gender identity discrimination and I don’t see how biological sex fits into the picture.
She had sued the platform and its founder Sally Grover in 2022 for unlawful gender identity discrimination in its services, and claimed Ms Grover revoked her account after seeing her photo and “considered her to be male”.
Judge Robert Bromwich said in his ruling that while Ms Tickle was not directly discriminated against, her claim of indirect discrimination was successful as using the Giggle App required her “to have the appearance of a cisgender woman”.
Judge Bromwich said the evidence did not establish Ms Tickle was excluded from Giggle directly “by reason of her gender identity although it remains possible that this was the real but unproven reason”.
“Uncensored”: https://x.com/KarlMaxxer/status/1823753493783699901. I don’t know if this is really true, but if it is, it’s something that they should’ve called out in their article.
Right, I see what you mean now. I misread your comment as explaining something that was already clear.
A false positive is when the detector incorrectly flags a human-written text as AI-written. While a detection rate of 99.9% sounds impressive, it’s not very reliable if it comes with a false positive rate of 20%.
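The base rate matters a lot here. A quick Bayes’ rule sketch, using the 99.9% detection rate and a hypothetical 20% false positive rate, with an assumed fraction of AI-written texts (the 10% prior is made up for illustration):

```python
def precision(detection_rate, false_positive_rate, ai_fraction):
    """Probability that a flagged text is actually AI-written (Bayes' rule)."""
    true_pos = detection_rate * ai_fraction            # AI texts correctly flagged
    false_pos = false_positive_rate * (1 - ai_fraction)  # human texts wrongly flagged
    return true_pos / (true_pos + false_pos)

# 99.9% detection, 20% false positives, assumed 10% of texts AI-written:
p = precision(0.999, 0.20, 0.10)
print(f"{p:.0%} of flagged texts would actually be AI-written")
```

With those numbers, only about a third of flagged texts would actually be AI-written, meaning most accusations would hit human authors.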
The kernel is written in C and a bit of assembly. When support for a new language is added, that’s big news and worthy of headlines. The other languages are just there for various tools and helper scripts and are not used for kernel code, so not newsworthy by any means.
I might be wrong, but I think they will probably let the OS handle the biometrics offline, which means that they won’t have access to your biometrics and only work with cryptographic keys. Otherwise it doesn’t make sense, as apps usually don’t have direct access to the fingerprint reader. It will probably be similar to how a passkey works.
Exactly. Same as with sleeping data. When it says that you’ve been awake 3 times last night, it doesn’t really mean much. That kind of data shouldn’t be presented as being accurate. However, it could still be made accessible behind a button or menu option. For example, it might show you that the signal is intermittent because your watch band isn’t tight enough, or other anomalies. And of course you’re right: they won’t tell you that the data is of low quality and as a user you don’t necessarily know that, so in that sense it can be very misleading.