Firefox also sells personalized ads
Is this in reference to sponsored content on the new tab page?
and tracks your keystrokes.
Telemetry? Or something else?
Well I’m convinced! /s
Idk, pretty sure TinTin is Belgian
My favourite is “absolute free speech!!” combined with “if you say something someone doesn’t like, they are entitled to punch you”
Anyone who says that is forgetting that punching falls under assault.
Hate speech is far beyond merely “something I don’t like”. It is advocating for the oppression and even eradication of people based on their very identity.
Hate speech should not be tolerated if we want to live in a society that tolerates the existence of others. (So called “paradox of tolerance” which is really not a paradox when you frame it as I have). We can tolerate the existence of bigoted assholes but prohibit them spreading their bigotry. Otherwise we live in a society that supports intolerance.
You might be right now that you mention it. 🤔
Agree that other parts of the EM spectrum could enhance the ability of MV to recognize things. Appreciate the insights – maybe I will be able to use this when I get back to tinkering with MV as a hobbyist.
Of course, identifying one object is one level. For a general-purpose replacement for human ability, since that’s what the thread is focused (ahem) on, it has to identify tens of thousands of objects.
I need to rethink my opinion a bit. Not only how far general object recognition is but also how one can “cheat” to enable robotic automation.
Tasks that are more limited in scope and variability would be a lot less demanding. For a silly example, let’s say we want to automate replacing fuses in cars. We limit it to cars with fuse boxes in the engine bay, and we can mark the fuse box with a visual tag the robot can detect. The layout of the fuses per vehicle model could be stored. The code on the fuse box identifies the model. The robot then uses actuators to remove the cover, orients itself to the box using more markers, and the rest is basically pick-and-place technology. That’s a smaller and easier problem to solve than “fix anything possibly wrong with a car”. A similar deal could be done for oil changes.
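A toy sketch of that division of labor, with every model code and coordinate invented for illustration: the tag identifies the model, a stored layout turns the job into pick-and-place.

```python
# Hypothetical sketch: a visual tag on the fuse box names the vehicle
# model; a stored layout then reduces the job to pick-and-place moves.
# All model codes and coordinates below are made up for illustration.

# Fuse layouts per model: fuse name -> (x_mm, y_mm) in the box's frame.
FUSE_LAYOUTS = {
    "MODEL_A": {"headlights": (10, 25), "fuel_pump": (30, 25)},
    "MODEL_B": {"headlights": (12, 40), "fuel_pump": (48, 40)},
}

def plan_fuse_swap(tag_code: str, fuse_name: str):
    """Look up the tagged model's stored layout and return a simple
    pick-and-place plan relative to the fuse box markers."""
    layout = FUSE_LAYOUTS.get(tag_code)
    if layout is None or fuse_name not in layout:
        return None  # unknown model or fuse: bail out, call a human
    x, y = layout[fuse_name]
    return [
        ("remove_cover",),
        ("move_to", x, y),
        ("pull_fuse",),
        ("insert_fuse", x, y),
        ("replace_cover",),
    ]

plan = plan_fuse_swap("MODEL_A", "fuel_pump")
```

The point of the sketch: every hard perception problem is replaced by a lookup, and anything outside the stored table is explicitly punted to a human.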
For general purpose MV object detection, I would have to go check but my guess is that what is possible with state of the art MV is identifying a dozen or maybe even hundreds of objects so I suppose one could do quite a bit with that to automate some jobs. MV is not to my knowledge at a level of general purpose replacement for humans. Yet. Maybe it won’t take that much longer.
In ~15 years in the hobbyist space we’ve gone from recognizing anything of a specified color under some lighting conditions to identifying several specific objects. And without a ton of processing power either. It’s pretty damn impressive progress, really. We have security cameras that can identify animals, people, and delivery boxes. I am probably selling short what MV will be able to do in 15 more years.
All great points. I guess I need to think of this topic more from the “what is possible” mindset rather than the “this is too hard” mindset to get a fair assessment of what is coming. All while still framing it in the sense of improving worker efficiency and automating human tasks piecemeal over time.
Somebody let me know when the Artist Formerly Known as Twitter faces any consequences.
I was told earlier today this was due to a transporter accident.
Damn… nice work on the research! I will read through these as I get time. I genuinely didn’t think there would be much for manual labor stuff. I’m particularly interested in the plumber analysis.
I think augmentation makes a lot of sense for jobs where a human body is needed and it will be interesting to see how/if trade skill requirements change.
I’ll edit this as I read…
Plumbing. The article makes the point that it isn’t all or nothing. That as automation increases productivity, fewer workers are needed. Ok, sure, good point.
Robot plumber? A humanoid robot? Not very likely until enormous breakthroughs are made in machine vision (I can go into more detail…), battery power density, sensor density, etc. The places and situations vary far too greatly.
Rather than an Asimov-style robot, a more feasible yet productivity-enhancing solution is automating discrete tasks like pipe cutting. For example, you take your phone and measure the pipe as described in the link. Press a button, walk out to your truck, and by the time you get there the pipe cutter has already cut the length you need, saving you several minutes. That savings probably means you can do more jobs per day. Cool.
Edit 2
Oil rig worker. Interesting and expected use of AI to improve various aspects of the drilling process. What I had in mind was more like the people that actually do the manual labor.
Autonomous drones, for example, can be used to perform inspections without exposing workers to dangerous situations. In doing so, they can be equipped with sensors that send images and data to operators in real time to enable quick decisions and effective actions for maintenance and repair.
Now that’s pretty cool and will probably reduce demand for those performing inspections (some of whom will have to be at the other end receiving and analyzing data from the robot, until such time as AI can do that too).
Autonomous robots, on the other hand, can perform maintenance tasks while making targeted repairs to machinery and equipment.
Again, the technologies required to make this happen aren’t there yet. Machine vision (MV) alone is way too far from being general purpose. You can design an MV system that can, say, detect a Coke can and maybe a few other objects under controlled conditions.
But that’s the gotcha. Change the intensity of the lighting, or change its color temperature or hue, and the MV probably won’t work. It might also mistake a Diet Coke can or a similar-sized cylinder for a Coke can. If you want it to recognize any aluminum beverage can, that might be tough. Meanwhile, any child can easily identify a can in any number of conditions.
Now imagine a diesel engine generator, let’s say. Just getting a robot to change the oil would be nice. But it has to either be limited to a specific model of engine or be able to recognize where the oil drain plug and fill spot are for the various engines it might encounter.
What if the engine is a different color? Or dirty instead of clean? Or it’s night, or noon (harsh shadows), or overcast (soft shadows), or sunset (everything is tinted yellow-orange)? I suppose it could be trained for a specific rig and a specific time of day, but that means setup time costs a lot. It might be smarter to build some automated devices into the engine, like a valve on the oil pan, and a device to pump new oil in from a vat or standard container or whatever. That would be much easier. Maybe they already do this, idk.
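A toy illustration of that lighting brittleness, with a naive fixed RGB threshold standing in for “recognize the red can” (all thresholds and pixel values are invented):

```python
# Toy illustration of MV brittleness: a hard-coded RGB threshold that
# "recognizes" a red can fails as soon as the lighting warms up.
# All numbers are invented for illustration, not from a real system.

def looks_like_red_can(pixel):
    """Naive classifier: strong red channel, weak green and blue."""
    r, g, b = pixel
    return r > 180 and g < 80 and b < 80

def sunset_tint(pixel):
    """Simulate warm evening light: boost red and green, cut blue."""
    r, g, b = pixel
    return (min(r + 40, 255), min(g + 60, 255), max(b - 40, 0))

daylight_pixel = (200, 40, 50)               # red can, neutral light
evening_pixel = sunset_tint(daylight_pixel)  # same can at sunset

# Same physical object, different lighting: the daylight pixel passes
# the threshold, but the sunset-tinted pixel fails it because the
# green channel rose past the hard-coded limit.
```

Real systems mitigate this with white balancing, HSV color spaces, or learned features, but the underlying point stands: a child never needs any of that.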
Anyway… progress is being made in MV and we will make far more. That still leaves the question of an autonomous robot of some kind able to remove and reinstall a drain plug. It’s easy for us but you’d be surprised at how hard that would be for a robot.
A big part of it seems to be manipulation of the results? So, like, devs writing tests for more parts of the code base, but ones that are written to always pass.
Ok I’m going to answer my own question because I’m too curious to wait lol
Goodhart’s Law states that “when a measure becomes a target, it ceases to be a good measure.” In other words, when we use a measure to reward performance, we provide an incentive to manipulate the measure in order to receive the reward. This can sometimes result in actions that actually reduce the effectiveness of the measured system while paradoxically improving the measurement of system performance.

… The manipulation of measures resulting from Goodhart’s Law is pervasive because direct measures of effectiveness (MOEs), which are more difficult to manipulate, are also more difficult to measure, and sometimes simply impossible to define and quantify. As a result, analysts must often settle for measures of performance (MOPs) that correlate to the desired effect of the MOE.

… These negative effects can sometimes be avoided. When they cannot, they can be identified, mitigated, and even reversed.
This report recommends that the organizations that employ analysts should do the following:
[Source]
So how does a company manage anything if they can’t use measurement targets?
Like software engineering. How do you improve productivity or code quality if setting a target value for a measurement doesn’t work?
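To make the gamed-test idea from earlier concrete, a toy (entirely invented) example: a buggy function, one honest test, and one test written to always pass, so the pass-rate metric climbs while the bug survives.

```python
# Invented example of gaming a "tests passing" target.

def discount(price, percent):
    """Intended: reduce price by percent. Bug: divides by 10, not 100."""
    return price - price * percent / 10

def test_discount_real():
    """An honest test: fails, exposing the bug."""
    return discount(100, 20) == 80

def test_discount_gamed():
    """Written to always pass: runs the code, checks nothing useful."""
    discount(100, 20)
    return True

# Add more gamed tests and the pass rate climbs toward 100% while
# discount() still returns -100 for a 20% discount on 100.
```

This is Goodhart’s Law in miniature: the pass rate was a proxy for correctness, and the moment it became the target it stopped correlating with correctness.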
Well, we sure as shit better not get “involved” on the ground in Iran.
Is it? Maybe I’m out of the loop.
FWIW, EFF just published this:
New Privacy Badger Prevents Google From Mangling More of Your Links and Invading Your Privacy
Better late than never I guess?
Geez I just realized many people probably never lived to see this fixed… fuck. Now I’m sad and feeling extra mortal.
Uh not instantly. Not storewide in a moment. Make it networked and you can change prices per customer as they walk each aisle.
“that guy looks loaded, pop the prices, quick”
Streamline your Price Fixing shenanigans with our Tenant Screwer 3000™
Yeah no. That’s antisemitic bullshit you see literal Nazis spout off about.
This recent “story” was only posted in a shitty tabloid, picked up by a couple other shitty tabloids, and gullible people spread it all over social media.