• 0 Posts
  • 10 Comments
Joined 1 year ago
Cake day: June 1st, 2023

  • I try to use both equally, because I’m always on the hook for picking the “doomed” standard in any 50/50 contest.

    I can relate to that. It usually isn’t a coin flip for me, though. I’ll align with one technology over another because I can genuinely see an advantage, even if that technology is the underdog from the start. Consider that we’re weighing Firefish vs. Lemmy vs. Kbin, while all of them combined are the underdog compared with certain more well-established social platforms. I engage with all three (and others still), because I don’t know the future.



  • I think a human might consider the meaning of what is being said, whereas an LLM is only going to consider which token is the best one to use next. Humans might not be infallible, but they are presently better at detecting obvious BS that would slip undetected past an AI.

    Maybe this is an opportunity we haven’t considered: the chance to create a Turing CAPTCHA test. We can’t use Glorbo to do so, because it has already been written down, but perhaps it makes sense to have a nonsensical code phrase people can use to identify AIs: markers intentionally buried in articles written by human authors so they end up in LLM training data, plus a challenge/response that is never written down and is only passed verbally through real human-to-human interactions.



  • If a human can access your public repo and read comments posted on public forums, are they stealing your code? LLMs are just aggregators of a great many resources, and they aren’t doing anything a biological human can’t already do. The LLM can do it more efficiently than a human, while perhaps being more prone to error, since it doesn’t fully understand why something is written the way it is. As such, any current AI model is prone to errors at specific points, but in my experience it has been very good at organizing the broader solution.

    I can give you two examples. I started by trying to find out how a .NET API call was made. I was trying to implement retry logic for a call, and I got the answer I asked for. I then realized that the AI could do more for me. I asked it to write the routine, and it suggested using a library well suited for that purpose. I asked it to rewrite the routine without the external library, and it spit that out too. I could have written this completely from scratch; in fact I had already come up with something similar, but I was missing the API call I was originally looking for. On top of that, the result included some parts I would otherwise have had to go back and add, so it saved me a lot of time on something I already knew how to do.
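Library-free retry logic of the kind described is only a few lines. The original was .NET; here is a comparable sketch in Python, with the function name, attempt count, and backoff factor all chosen for illustration rather than taken from the AI’s actual output:

```python
import time

def retry(operation, attempts=3, delay=1.0, backoff=2.0):
    """Call `operation`; on failure, wait and retry with exponential backoff.

    Re-raises the last exception once all attempts are exhausted.
    """
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == attempts:
                raise  # out of attempts: surface the last error
            time.sleep(delay)
            delay *= backoff
```

A real resilience library adds things this sketch omits (jitter, filtering which exceptions are retryable, cancellation), which is presumably why the AI reached for one first.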

    In a second case, I asked it to solve a problem which at its heart was a binary search. To validate that the answer was correct it would need to go one extra step, but that last validation step wasn’t strictly necessary to answer the question. I was looking for the answer 10, but the AI gave me answers in the range of 9 to 11. It understands the basic concepts, but it still needs a biological human to validate what it generates.
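The original problem isn’t given, but integer square root is a classic example of the shape described: a binary search whose answer should be confirmed by one extra validation step. A sketch in Python, purely for illustration:

```python
def int_sqrt(target):
    """Largest n with n*n <= target, found by binary search."""
    lo, hi = 0, target
    while lo < hi:
        mid = (lo + hi + 1) // 2  # bias upward so lo always advances
        if mid * mid <= target:
            lo = mid
        else:
            hi = mid - 1
    # The extra validation step: confirm the boundary actually holds.
    assert lo * lo <= target < (lo + 1) ** 2
    return lo
```

Skipping that final assert is exactly the kind of shortcut that lets an off-by-one answer like 9 or 11 slip through unnoticed.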


  • We have, and there are still things to solve before this is completely practical. It is still different from connecting to a mainframe over a 3270 terminal. A closer example of how this would work is forwarding X11 from a remote system, or using SSH to reconnect to a server where I’ve run screen. Once I’ve connected to a GUI application running on a server, or reattached my session, where I’m connecting from matters much less. Extending this concept to Windows, you wouldn’t even need local storage for most needs. It won’t be practical in places with poor network connectivity, but where the network is reliable, high bandwidth, and low latency, it won’t be discernible from local use for most business applications. This is probably the biggest driving force behind XCloud: if Microsoft can make games run across networks with minimal problems, business applications are going to do just fine. XCloud works great for me, allowing me to stream with few problems. That’s less true for others in my family, so clearly this isn’t something which can roll out to everyone, everywhere, all at once. I think it would be great to be able to spin up additional vCPU cores, or grow drive space or system RAM, as needed per process, so that I’m not wasting cycles or underutilizing my hardware. That seems like it would become possible with this sort of platform.


  • For a business, I see this design as a strong benefit. Under most hiring contracts, work done for a company is the property of that company, so work done on a remote system can be tightly controlled. At the same time, it would allow someone to use their own thin client for both professional and personal work while keeping the two isolated. For someone doing freelance work, it makes sharing a natural extension of the process, and access can be granted or revoked as contracts come and go. That seems like an advantage to corporate IT departments.

    As for individuals, I don’t see how this takes away ownership. Regulations will be updated so users can request their data in compliance with GDPR, so nothing would become completely locked up. Should that ever be challenged, I don’t think any jurisdiction would rule that Microsoft owns the data. What a user will be able to do with the bits they receive is a different question.


  • Long term, there is real benefit to this sort of concept. You won’t have as much freedom to turn a cloud-based OS into a custom build, but what you will have is a machine which never has downtime for patches and security updates. The user runs their apps remotely, with all the power of data-center hardware, and an app instance can migrate from one host to another seamlessly, without the end user perceiving it. Furthermore, a user can access all their applications and data from whatever client they happen to be using, and the session will follow them from their terminal to their phone to their AR HMD.

    It isn’t going to be a change which happens overnight, and it will be more like how car engines have become less user-serviceable but more reliable and efficient. It will be a different experience for sure, but it has potential value beyond being a way to charge people subscriptions.