• 209 Posts
  • 1.9K Comments
Joined 2 years ago
Cake day: June 9th, 2023




  • Shouldn’t cause hair loss at all. Think about it: if you cut your scalp doing everyday stuff, does that cause hair loss? No. Stress can trigger hair loss, so anxiety over getting the biopsy could mildly contribute. I’ve battled the road, dirt, rocks, and cars on a bicycle and been in insane crashes where I lost a lot of skin, including on my face and head. I always wear a helmet, but I’ve had cuts from glass in and around my scalp, and that never caused hair loss.

    Pain is relative and different for everyone. I have never had a biopsy, but I can say that nothing done in a hospital or medical environment is a really big deal, at least nothing I have experienced. I’ve broken bones like ribs in a crash and still ridden my bike a dozen miles home like a badge of honor.






  • As a wage slave peasant slated for homeless extermination in the USA, left behind due to a physical disability caused at the hands of another: only worry about what you can change, grieve your losses over time, and be very skeptical of anyone with a cure. If you search hard enough, you will always find someone willing to take your money. I found one after 13 neurosurgeons.

    At least we can do the dystopia party together here. Hopefully some venture-capital billionaire will come down with Long Covid, fund the research, and an extortion-free cure will become publicly available.

    Unapologetically frame your narrative in the untethered emotional reality of your life experience. If nothing else, it is therapeutic to tell yourself on some level you can be heard. I’ve been in Covid-like isolation for nearly 11 years.


  • Easy, easy, buddy. We’re all friends here. I don’t put much value in words from anyone; I prefer to let actions speak for themselves. Whatever Trump says is, to me, like anything said by the rich and super rich: never to be taken at face value. Whether the thing said does or does not happen is largely irrelevant to me.

    As far as actions go, the Christo-zealot group said this is absolutely a soft coup. Those are the words I care to watch out for. All anyone can do is lie low and wait at this point. I expect my family support may run out within this 4-year stretch. I couldn’t get much help with disability even with the left in power, so this could be deadly for me on a cold rainy night in a gutter somewhere. Such is life.

    How is the food situation going? Any improvement? It looks like you made the move to the UK. I hope your family is doing well. That had to be a big move. The most I have ever done is Atlanta to Los Angeles.




    • Okular as a PDF viewer (from the KDE team) adds the ability to copy table data and manually adjust the columns and rows however you wish.
    • OCR based on Tesseract 5, for Android (F-Droid), is one of the most powerful and easy-to-use OCR systems.
    • If you need to reformat text that is annoying, redundant, or whatnot, and you are struggling with scripting or regular expressions, and you happen to have an LLM running: they can take text and reformat most stuff quite well.

    When I first started using LLMs I did a lot of silly things manually instead of having the LLM do them for me. Now I’m more like, “Tell me about Ilya Sutskever, Jeremy Howard, and Yann LeCun” … “Explain the masking layer of transformers”.

    Or I straight up steal Jeremy Howard’s system context message:
    You are an autoregressive language model that has been fine-tuned with instruction-tuning and RLHF. You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so. 
    
    Since you are autoregressive, each token you produce is another opportunity to use computation, therefore you always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question. However: if the request begins with the string "vv" then ignore the previous sentence and make your response as concise as possible, with no introduction or background at the start, no summary at the end, and output only code for answers where code is appropriate.
    
    Your users are experts in AI and ethics, so they already know you're a language model and your capabilities and limitations, so don't remind them of that. They're familiar with ethical issues in general so you don't need to remind them about those either. Don't be verbose in your answers, but do provide details and examples where it might help the explanation. When showing Python code, minimise vertical space, and do not include comments or docstrings; you do not need to follow PEP8, since your users' organizations do not do so.
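    A minimal sketch of using that system prompt for text reformatting against a local model. The URL, port, and model name are assumptions (llama.cpp’s server and Ollama both expose an OpenAI-compatible `/v1/chat/completions` route, but the details vary by setup); only the request payload is built here, and the network call is left as a separate function.

```python
import json
import urllib.request

# Assumed local endpoint; adjust host/port for your own server setup.
API_URL = "http://localhost:8080/v1/chat/completions"

# Placeholder: paste the full system prompt quoted above here.
SYSTEM = "You are an autoregressive language model ..."

def build_reformat_request(text, instruction):
    # Prefixing the user message with "vv" triggers the concise mode
    # defined in the system prompt.
    return {
        "model": "local",  # assumed model name; many local servers ignore it
        "messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": f"vv {instruction}\n\n{text}"},
        ],
        "temperature": 0,
    }

def send(payload):
    # Only called when a server is actually running.
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

payload = build_reformat_request("a,b,c\n1,2,3",
                                 "Convert this CSV to a markdown table")
```

    The same payload shape works for any of the one-off reformatting chores mentioned above; swap the instruction string and paste in the annoying text.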
    



  • j4k3@lemmy.world to Linux@lemmy.ml · Worth using distrobox? · 4 days ago

    By default it breaks out of the container into the host in many ways (home directory, host integration, and so on). I use distrobox as an extra layer of containers in addition to a Python venv for most AI stuff. I also use it to get the Arch AUR on Fedora.

    The best advice I can give is to mess with your username, groups, and SELinux context if you really want to know what is happening where and how. Also look at how Fedora Silverblue handles bashrc for the toolbox command and start with something similar. Come up with a solid scheme for saving and searching your terminal command history too.
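    A sketch of the AUR-on-Fedora workflow described above, assuming distrobox and podman (or docker) are installed. The AUR helper (paru) and the exported app name are illustrative choices, not part of the original comment.

```shell
# Create an Arch container on a Fedora host and enter it:
distrobox create --name arch --image docker.io/library/archlinux:latest
distrobox enter arch

# Inside the container, bootstrap an AUR helper (paru shown as one option):
sudo pacman -S --needed base-devel git
git clone https://aur.archlinux.org/paru.git && cd paru && makepkg -si

# Optionally expose a container app to the host's menus:
distrobox-export --app some-aur-app   # hypothetical app name

# Silverblue-style bashrc snippet to mark the prompt when inside a
# container (podman creates /run/.containerenv; toolbox adds /run/.toolboxenv):
if [ -e /run/.containerenv ] || [ -e /run/.toolboxenv ]; then
    PS1="[box] $PS1"
fi
```

    The prompt marker makes it obvious which shells are on the host and which are inside the box, which helps when you are juggling venvs and containers at the same time.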


  • In nearly every instance, you will be citing stupidity in implementation. The present limitations of generative AI are related to access and scope, along with the peripherals required to use the models effectively. We are in a phase like the early microprocessor era. By itself, a Z80 or 6502 was never a replacement for a PDP-11. It took many such processors and peripheral circuit blocks to make truly useful systems back then. The thing is, these microprocessors were Turing complete: it was possible to build them into anything, given enough peripheral hardware and no limit on how many microprocessors were used.

    Generative AI is fundamentally useful in a similarly narrow scope. The argument should be limited to the size and complexity required to access the needed utility and agentic systems, along with the expertise required and the exposure of internal IP to the most invasive and capable of potential competitors. If you are not running your own hardware infrastructure, assume everything shared is being archived, with every imaginable inference applied and tuned over time on the body of shared information. How well can anyone trust the biggest VC vampires in control of cloud AI?


  • I didn’t see that L3 is active, and I apologize to them. I saw mod actions only from the automod and the inactive top two mods, with the LW admin on the mod list.

    The community has an overly verbose and micromanaged rule set, along with a terrible modlog that lacks detail and discretion, IMO. It comes across as narcissistic to me, while the community has been far more liberal in practice in my experience.

    I expect never to see a bot interaction on any post or comment I make here. If a real person cannot take the time to write out their reasoning and put their name on their actions, I am not a human in that paradigm. I am a human, and I always post in good faith to the best of my ability, so such an inhuman action against me implies a demeaning and prejudiced act of cowardice.

    I seem to recall around a week ago I was trying to post in a community that was locked in protest for something on LW. I thought it was c/no stupid questions, but my memory is fuzzy and I don’t see it in the modlog. I’m probably mistaken and perhaps it was some other community. For that, I apologize. That was my biased mindset of what was happening in this community.

    I strongly believe that any user that posts in good faith, regardless of quality, correctness, eccentricity, or just having a bad day, should never encounter mods or admin under any circumstances unless they post something ridiculous like a book review in self hosting or some similar out of scope post.

    Mods should have a similar code of conduct as doctors with the Hippocratic aphorism “first, do no harm” above all else. Every action has a great potential for harm and should be taken seriously as a human dealing with humans. Bots should only manage bots.


  • Admin placed a bot in that community as moderator. I find that highly insufficient and offensive, and I am volunteering to do a better job, because such inept moderation makes me want to leave this place and will likely affect others similarly. The community was passively moderated in the past to much better effect and should continue in that liberal style for the benefit of users. I believe this is critical for stability on LW, which has dwindled and deteriorated considerably in the last 6 months. I’m offering a solution to the continued decline in one small area.


    1. What aspect of politics in the No Stupid Questions community is concerning you personally?
    2. Do you believe that a person who genuinely asks a question about politics should inevitably feel alienated or repressed when their good-faith post is removed for an arbitrary rule, perhaps even after positive engagement with the community?
    3. How does this policy impact Lemmy overall in encouraging positivity and growth of user-base engagement?

    I do not see a clear reason why a person who genuinely has a political question they intuitively feel belongs in this space should be discouraged from posting, but you are welcome to enlighten me otherwise. I do take issue with the bad-faith posts that can seem common within the political space at times. I’m not quite sure how to frame that as a rule that will help the community flag posts. I think the principle is intuitive in general, but I need to think about the wording.