Just a guy jumping from a hot mess into more prosperous waters.

  • 0 Posts
  • 38 Comments
Joined 1 year ago
Cake day: June 22nd, 2023




  • It’s not that difficult, but bad drivers make it difficult for everyone else. Coming to a complete stop should be instinct; it’s a red light, after all. But some still treat it like a green because of right on red. They’ll take the turn at up to 24 kph so long as they don’t see obstacles at a glance. This is the danger for pedestrians and oncoming traffic: everything is secondary to the bad driver’s intention. Add the popularity of bigger vehicles, which increases the likelihood of fatal crashes and reduces curb visibility, and it can get pretty dicey.

    Ideally I’d like to see stronger enforcement of the full stop on red. But if we can’t get bad drivers to change, I’ll take sitting at the red over an accident any day.




  • That’s fine, but not the primary issue.

    At some point these companies will need to get licenses for any copyrighted work that was part of the training data, or start over with public domain works only. The art may be data, but that data has legal owners whose rights grant control over its use.

    Another way to think about it is proprietary code. You can view it and learn from it at your leisure. But using it commercially requires a license, one that clearly defines what can and cannot be done with it, as well as fair compensation.


  • The short version is that it’s a licensing issue. All art is free to view, but the moment you try to integrate it into a commercial product/service you’ll owe someone money unless the artist is given fair compensation in some other form.

    For example, artists agree to grant a usage license to popular art sites to host and display their works. That license does not transfer to the guy/company scraping portfolios to fuel their AI. Unfortunately, as we can see from the article, AI may be able to generate, but it still lacks imagination and inspiration, traits fundamental to creating truly derivative works. When money changes hands, the artist is denied compensation, because the work was never licensed and they are excluded from their portion of the sale.

    Another example: I am a photographer uploading my images to a stock image site. As part of the ToS I agree to provide a license to host, display, and relicense to buyers on my behalf. The stock site now offers an AI that creates new images based on its portfolio. The catch is that all attributed works result in a monetary payment to the artists. When buyers license AI-generated works based on my images, I get a percentage of the sale. The stock site is legally compliant because it has a license to use my work, and I receive fair compensation when the images are used. The cycle is complete.

    It gets trickier in practice, but licensing and compensation are the crux of the matter.


  • A lot of nuance will be missed without some gradation between “I <3 China” and “Down with Pooh!” For example, if we added “Slightly favorable”, “Neutral”, and “Slightly unfavorable”, we would begin to see just how favorable younger generations are. Rather than presume there is a deep divide on trade policy, if two bars are almost equal, we may see they are largely neutral. Similarly, we could see just how favorable their views of TikTok really are by looking at the spread between neutral and “I <3 China!”


  • The short version is that there are two images and a sidecar/XMP file sandwiched into one file. First is the standard dynamic range image, what you’d expect to see from a JPEG. Second is the gain map, an image whose contents encode detail outside of SDR. The sidecar/XMP file has instructions on how to blend the two images together to create a consistent HDR image across displays.

    So it’s HDR-ish enough for the average person. I like this solution, especially after seeing the hellscape that is DSLR raw format support.
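    The blend step described above can be sketched roughly as follows. This is a simplified illustration, not any vendor's actual format: real gain map formats store min/max boost values and offsets in the XMP metadata, but the core idea of a per-pixel log2 boost scaled by display headroom looks like this:

```python
def apply_gain_map(sdr, gain, headroom_stops=2.0):
    """Blend an SDR image with its gain map to reconstruct an HDR image.

    sdr:  rows of floats in [0, 1], the standard dynamic range image
    gain: rows of floats in [0, 1], the encoded gain map
    headroom_stops: available display headroom in stops (in a real file
                    this comes from the sidecar/XMP metadata)
    """
    # The gain map stores a per-pixel log2 boost; scaling it by the
    # display's headroom is what keeps the result consistent across
    # displays with different HDR capabilities.
    return [
        [s * 2.0 ** (g * headroom_stops) for s, g in zip(srow, grow)]
        for srow, grow in zip(sdr, gain)
    ]

# Hypothetical 2x2 image: pixels with gain 0 stay SDR, while bright
# pixels get boosted past 1.0 into HDR territory.
sdr = [[0.2, 0.5], [0.8, 1.0]]
gain = [[0.0, 0.0], [0.5, 1.0]]
hdr = apply_gain_map(sdr, gain)  # [[0.2, 0.5], [1.6, 4.0]]
```

    An SDR-only display simply shows the base image and ignores the gain map, which is why the format degrades gracefully.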







  • It is pretty idiotic, IMO, that the music industry can ban people from showing song lyrics. IIRC you have to get a license to list song lyrics, since they’re technically a copyrighted work.

    Here’s the thing: if it’s copyrightable, you can get a license for it. Amazon already has licenses to sell and stream music; that part of the usage agreement was already negotiated. A simple analogy: you want to buy three games from a store, you pay for two but leave with three. Obviously the store is not happy with you. You’ve shown you can pay legally for two games, yet took the third without paying.

    But there are some interesting caveats in the article:

    The lawsuit, which is the first from a music publisher against an AI company over the use of lyrics, was filed in the wake of the Authors Guild — representing a host of prominent fiction authors including George R.R. Martin, Jonathan Franzen and John Grisham — suing OpenAI last month.

    This makes sense, since lyrics aren’t all that different from poetry, and whole albums could be considered a collection of short works. So loosening the copyright protections may give AI companies more data to work with, but it would end up hurting authors (lyricists, screenwriters, novelists) and related fields. A real-world fallout would be SAG-AFTRA strikers losing royalties and bargaining power, while empowering and enriching the big studios’ own AI models.

    I wanted to see if Anthropic, the company being sued, has the money on hand to pay for licenses, to square up legally if you will. Well, it doesn’t look like Anthropic is hurting for cash as of the 3rd quarter of 2023.

    Amazon said on Monday that it’s investing up to $4 billion into the artificial intelligence company Anthropic in exchange for partial ownership and Anthropic’s greater use of Amazon Web Services (AWS), the e-commerce giant’s cloud computing platform.

    Even if the licenses were $10 million in total, that would leave $3,990,000,000 on hand; the fees would amount to just 0.25% of what Amazon offered. I don’t see how they’d walk away without settling for the licensing fees and legal expenses. They’re financially secure and partially owned by a company that is legally compliant with its own handling of intellectual property.
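    A quick sanity check on those numbers (the $10 million licensing figure is hypothetical; the $4 billion is Amazon's reported investment):

```python
investment = 4_000_000_000   # Amazon's reported investment, in dollars
license_cost = 10_000_000    # hypothetical total licensing bill

remaining = investment - license_cost
# Licensing cost as a percentage of the investment, rounded for display
share_pct = round(license_cost / investment * 100, 2)

print(remaining)   # 3990000000
print(share_pct)   # 0.25
```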


  • Shazbot@lemmy.world to 196@lemmy.blahaj.zone: AI rule

    It really depends on what the site’s terms of service/usage agreement says about the content posted on the site.

    For example, a site like ArtStation has users agree to a usage license that lets it host and transmit the images in their portfolios. However, there is no language saying visitors are also granted these rights. So you may be able to collect the site text under fair use, but the art itself requires reaching out for anything other than personal/educational use.



  • If we apply the current ruling of the US Copyright Office, then the prompt writer cannot claim copyright if AI produced the majority of the final product. AI itself is software and ineligible for copyright; we can debate sentience when we get there. The researchers are also out, as they simply produce the tool, unless you’re keen on giving companies like Canon and Adobe spontaneous ownership of the media their equipment and software have created.

    As for the artists the AI output is based upon, we already have legal precedent for this situation. Sampling has been a common aspect of the music industry for decades now. Whenever a musician samples work from others, they are required to get a license and pay royalties, at an agreed percentage/amount based on performance metrics. Photographers and filmmakers are also required to have releases (rights to a person’s image, the likeness of a building) and also pay royalties. Actors are likewise entitled to royalties when licensing out their likeness. This has been the framework that allowed artists to continue benefiting from their contributions as companies min-maxed markets.

    Hence Shutterstock’s terms for copyright on AI images is both building upon legal precedent, and could be the first step in getting AI work copyright protection: obtaining the rights to legally use the dataset. The second would be determining how to pay out royalties based on how the AI called and used images from the dataset. The system isn’t broken by any means, its the public’s misunderstanding of the system that makes the situation confusing.