Their “manifesto”:

Superintelligence is within reach.

Building safe superintelligence (SSI) is the most important technical problem of our time.

We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.

It’s called Safe Superintelligence.

SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.

We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.

This way, we can scale in peace.

Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.

We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent.

We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.

If that’s you, we offer an opportunity to do your life’s work and help solve the most important technical challenge of our age.

Now is the time. Join us.

Ilya Sutskever, Daniel Gross, Daniel Levy

  • _sideffect@lemmy.world

    Lmao, no it’s not in reach.

    More tech bro bullshit just to get fools to invest and make him rich (which will work).

    • Pilferjinx@lemmy.world

      LLMs are a start. I can see them becoming the machine/human interface to a broad array of specialized applications. If we want to see true AI we’ll need to add efficient complexity, and perhaps, if we don’t want it to be exclusively contained in a platonic type of realm, we’ll need to give our programs direct access to explore our physical one.

  • Alphane Moon@lemmy.worldOP

    This honestly looks like a grift to get a nice salary for a few years on VC money. These are not random sales goons peddling shit they don’t understand. They don’t even bother to define “superintelligence”, let alone what they mean by “safe superintelligence”.

    I find it hard to believe this wasn’t written with malicious intent. But maybe I am too cynical and they are so used to people kissing their asses that they think their shit doesn’t smell. But money definitely plays some role in this; they would be stupid not to cash in while the AI hype is hot.

    • webghost0101@sopuli.xyz

      There are very few people in the world who understand LLMs at as deep a technical level as Ilya.

      I honestly don’t think there is much else in the world he is interested in doing other than working on aligning powerful AI.

      Whether his almost anti-commercial style ends up accomplishing much, I don’t know, but his intentions are literal and clear.

      • Alphane Moon@lemmy.worldOP

        What do you mean by anti-commercial style? I am not from North America, but this seems like pretty typical PR copytext for local tech companies: lots of pomp, banality, bombast, and vague assertions of caring about the world. It almost reads like satire at this point, like they’re trying to take the piss.

        If his intentions are literal and clear, what does he mean by “superintelligence” (please be specific) and in what way is it safe?

        • webghost0101@sopuli.xyz

          Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.

          This is the guy who turned against Sam for being too much about releasing product. I don’t think he plans on delivering much product at all. The reason to invest isn’t to gain profit but to avoid losing to an apocalyptic event, which you may or may not personally believe in; many Silicon Valley types do.

          A safe AI would be one that does not spell the end of humanity or the planet. Ilya is famously obsessed with creating what’s basically a benevolent AI god-mommy, and deeply afraid of an uncontrollable, malicious Skynet.

          • Alphane Moon@lemmy.worldOP

            I don’t consider tech company boardroom drama to be an indicator of anything (in and of itself). This is not some complex dilemma around morality and “doing the right thing”.

            Is my take on their PR copytext unreasonable? Is my interpretation purely a matter of subjectivity?

            Why should I buy into this “AI god-mommy” and “skynet” stuff? Guy can’t even provide a definition of “superintelligence”. Seems very suspicious for a “top mind in AI” (paraphrasing your description).

            Don’t get me wrong, I am not saying he acts like a movie antagonist IRL, but that doesn’t mean we have any reason to trust his motives or ignore the long history of similar proclamations.

            • webghost0101@sopuli.xyz

              No, I applaud a healthy dose of skepticism.

              I am anything but in favor of idolizing Silicon Valley gurus and tech leaders, but from Sutskever I have seen enough to know he is one of the few actually worth paying attention to.

              Artificial superintelligence, or ASI, is the step beyond AGI (artificial general intelligence).

              The latter is equal or superior in capability to a real human being in almost all fields.

              Artificial superintelligence was defined (long before OpenAI was a thing) as transcending human intelligence in every conceivable way, at which point it is a fully independent entity that can no longer be controlled or shut down.

              • Alphane Moon@lemmy.worldOP

                Thank you for the clarification regarding ASI. That still leaves the question of the definition of “safe ASI”, a key point that is emphasized in their manifesto.

                To use your example, it’s like an early mass-market car industry professional (say, in 1890) discussing road safety and ethical dilemmas on roads dominated by regular drivers alongside a large share of L4/L5 cars (with some of them being used as part-time taxis). I just don’t buy it.

                Mind you, I am not anti-ML/AI. I am an avid user of “AI” (ML?) upscaling (specifically video) and, to a lesser extent, Stable Diffusion. While AI video upscaling is very fiddly and good results can be hard to get right, it is clearly on another level quality-wise compared to “classical” upscaling algorithms. I was truly impressed when I was able to run my own SD upscale with good results.

                What I am opposed to is oligarchs, oligarch-wannabes, and shallow-sounding proclamations of grandiose this or that. As far as I am concerned it’s all bullshit, and they are all, to one degree or another, soulless ghouls that will eat your children alive for the right price and the correct mental excuse model (I am only partially exaggerating; happy to clarify if needed).

                If one has all these grand plans for safe ASI, concern for humanity and whatnot, set up a public repo and release all your code under the GPL (with all relevant documentation, patent indemnification, no trademark tricks, etc.). Considering Sutskever’s status as AI royalty who is also allegedly concerned about humanity, he would be the ideal person to pull this off.

                If you can’t do that, then chances are you’re lying about your true motives. It’s really as simple as that.

                • webghost0101@sopuli.xyz

                  No need to clarify what you meant about the oligarchs; there’s barely any exaggeration there. “Ghouls” is quite accurate.

                  Considering the context of a worst-case scenario (a hostile takeover by an artificial superior), which honestly is indistinguishable from generic end-of-the-world doomerism prophecies but is very alive in Sutskever’s circles, I believe “safe AI” consists of the very low bar of “humanity survives while AGI improves standards of living worldwide”. Of course, for this I am reading between the lines based on previously acquired information.

                  One could argue that if ASI is created, the possibilities become very black and white:

                  • ASI is indifferent to human beings and pursues its own goals, regardless of the consequences for the human race. It could even find a way off the planet and just abandon us.

                  • ASI is misaligned with humanity and we become but a resource, treated no differently than we have historically treated animals and plants.

                  • ASI is aligned with humanity and it has the best intentions for our future.

                  In any of these scenarios it would be impossible to calculate its intentions, because by definition it is more intelligent than all of us. It’s possible that some things we understand as moral may be immoral from a better-informed perspective, and vice versa.

                  The scary thing is we won’t be able to tell whether it’s malicious and pretending to be good, or benevolent and trying to fix us. Would it respect consent if, say, a racist refuses therapy?

                  Of course, we could just as likely hit a roadblock next week and the whole hype could die out for another 10 years.

          • moon@lemmy.ml

            I don’t think he plans on delivering much product at all

            Well, good news. If the product you’re imagining is ‘Skynet’ or a ‘god-mommy’, both of those are science fiction, and we don’t need whatever this bullshit is to save us.

            • webghost0101@sopuli.xyz

              You’re entitled to that opinion, and so are others. Sutskever may be an actual loony… or an absolute genius. Or both; that isn’t up for debate here.

              I am just explaining what this is about, because if you think this is “just another money raiser” you obviously haven’t paid enough attention to who exactly this guy is.

              Superintelligence is a well-defined term in artificial intelligence, by the way, in case you’re still confused. You may have seen these terms plastered around like buzzwords, but all of these definitions precede the AI hype of the last few years.

              ML = machine learning: algorithms that improve over time.

              AI = artificial intelligence: machine learning with complex logic, mimicking real intelligence. <- we are here

              AGI = artificial general intelligence: an AI agent that functions intelligently at a level indistinguishable from a real human. <- some experts estimate this will be achieved before 2030

              ASI = artificial superintelligence: AGI that transcends human intelligence and capabilities in every way.

              It may not sound real to you, but if you ever visit the singularity sub on Reddit you will see that a great number of people think it is.

              Also, everything is science fiction till it’s not. Horseless carriages were science fiction; so were cordless phones. The first airplane went up in 1903, and 66 years later we landed on the moon.

              • moon@lemmy.ml

                The point is not that we can’t imagine speculative technologies. The point is that this is a grift which distracts from the real and present threats of AI, like the threats to privacy, to artists’ livelihoods, and to the internet itself, which is being poisoned by LLM-generated content.

  • Imgonnatrythis@sh.itjust.works

    I don’t know about y’all, but a company called “safe super intelligence” sure doesn’t sound like it could ever do anything sinister. Should probably go ahead and let this one train on government databases.

    • SonicDeathTaco@lemm.ee

      My “not involved in human trafficking” t-shirt is raising a lot of questions already answered by my shirt.

  • Alphane Moon@lemmy.worldOP

    Just noticed that the cropped image makes it look like he is doing a Nazi salute, and then the first sentence of their “manifesto” is “Superintelligence is within reach.” :)

  • Visstix@lemmy.world

    The number of AI companies is slowly passing the number of cryptocurrencies. What’s gonna be the new flavor of the year?

  • glimse@lemmy.world

    I feel kind of bad commenting on his physical appearance, but as a guy who balded in the same pattern… fuckin shave your head, dude. Or, since you’re rich as fuck, spend the 10k for a transplant. It looks so bad, and not in a “wow, it’s ugly but he can sure pull it off” kind of way. More like an “I never rescheduled the appointment I missed at the barber” kind of way.

  • zingo@lemmy.ca

    Call me skeptical, but haven’t you guys watched Terminator 2?

    These guys will end up blowing up their own facility with a hand detonator when Skynet becomes our overlord. You want to join that crowd?

    I’m always going to self-host my shit. Tell you that right now.

    • Alphane Moon@lemmy.worldOP

      Unfortunately, Terminator 2 is a bit childish and naive in its script.

      More realistically, these individuals will try to fly away with oligarch Peter Thiel to his end-of-the-world bunker in New Zealand.

      If in some fucked up reality this ever happens (IMO there are far more pressing problems in the world), I hope the New Zealanders will have a very long and unpleasant surprise in store for these individuals.