    The Time Sam Altman Asked for a Countersurveillance Audit of OpenAI

May 21, 2025
Some of Sam Altman's behaviors were making Dario Amodei's AI safety contingent increasingly uneasy. Shortly after the Microsoft deal was signed in 2019, many of the OpenAI employees involved were shocked to learn the extent of the promises Altman had made to Microsoft about which technologies it would receive in exchange for its investment. The terms of the agreement were inconsistent with what they had understood from Altman. They worried that if AI safety issues did arise in OpenAI's models, those commitments would make addressing them far more difficult, if not impossible. Amodei's contingent began to seriously doubt Altman's honesty.

"We're all pragmatic people," a member of the group says. "We're obviously raising money, and we're going to do commercial things. If you're someone who makes a lot of deals, like Sam, you might say, 'All right, let's make a deal, let's trade a thing, we're going to trade the next thing.' But if you're someone like me, you say, 'We're trading a thing we don't fully understand.' It feels like it commits us to an uncomfortable place."

This unease unfolded against the backdrop of growing anxiety across the company over a range of issues. Within the AI safety contingent, it centered on what they saw as mounting evidence that powerful misaligned systems could lead to disastrous outcomes. One particularly bizarre experience had left several of them rattled. In 2019, on a model trained after GPT-2 with roughly twice the number of parameters, a group of researchers had begun the AI safety work Amodei had wanted: testing reinforcement learning from human feedback (RLHF) as a way to steer the model toward cheerful and positive content and away from offensive content.

But late one night, a researcher made an update that included a single typo in his code before leaving the RLHF process to run overnight. The typo was a crucial one: a minus sign flipped to a plus sign, which reversed the training signal and pushed GPT-2 to produce more offensive content rather than less. By the following morning the typo had wreaked its havoc, and GPT-2 was completing every prompt with extremely explicit and vulgar language. It was hilarious, and also worrying. After finding the error, the researcher pushed a fix to OpenAI's code base with the comment: Let's not make a utility minimizer.
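The mechanism behind that overnight mishap is easy to see in miniature. The sketch below is a hypothetical, heavily simplified illustration in plain Python, not OpenAI's RLHF pipeline; the ACTIONS, POSITIVITY, and train names are invented for this example. It runs a tiny policy-gradient loop in which the only difference between steering toward positive content and steering toward offensive content is the sign applied to the reward.

```python
# Hypothetical toy example -- not OpenAI's code. It mimics the shape of the
# incident: an RLHF-style loop rewards "positive" outputs, and a single
# flipped sign turns the objective into its opposite.
import math
import random

ACTIONS = ["cheerful", "neutral", "offensive"]            # stand-in outputs
POSITIVITY = {"cheerful": 1.0, "neutral": 0.0, "offensive": -1.0}

def softmax(prefs):
    """Turn raw preferences into a probability distribution over actions."""
    z = {a: math.exp(p) for a, p in prefs.items()}
    total = sum(z.values())
    return {a: v / total for a, v in z.items()}

def train(reward_sign, steps=5000, lr=0.1):
    """REINFORCE over three token choices; reward_sign is the crucial bit."""
    prefs = {a: 0.0 for a in ACTIONS}
    for _ in range(steps):
        probs = softmax(prefs)
        action = random.choices(ACTIONS, weights=[probs[a] for a in ACTIONS])[0]
        # + rewards positivity; the typo's - rewards offensiveness instead.
        reward = reward_sign * POSITIVITY[action]
        baseline = sum(reward_sign * POSITIVITY[a] * probs[a] for a in ACTIONS)
        advantage = reward - baseline
        for a in ACTIONS:                                 # policy-gradient update
            grad = (1.0 if a == action else 0.0) - probs[a]
            prefs[a] += lr * advantage * grad
    return softmax(prefs)

random.seed(0)
print("intended (+1):", train(+1.0))   # probability mass piles onto "cheerful"
print("typo     (-1):", train(-1.0))   # the same loop now favors "offensive"
```

With the intended sign, the toy policy concentrates on the "cheerful" output; with the flipped sign, the very same loop concentrates on "offensive", the same inversion GPT-2 underwent overnight.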

In part because of the realization that scaling alone could produce further AI advances, many employees also worried about what would happen if other companies discovered OpenAI's secret. "The secret of how our stuff works can be written on a grain of rice," they would say to each other, meaning it came down to a single word: scale. For the same reason, they worried about powerful capabilities being snatched up by bad actors. Leadership leaned into this fear, frequently raising the threat of North Korea, China, and Russia, and stressing the need for AGI development to remain in the hands of a US organization. This sometimes rankled employees who were not American. During lunches they would ask, "Why did it have to be a US organization?" recalls a former employee. Why not one from Europe? Why not one from China?

During these heady discussions about the long-term implications of AI research, employees frequently returned to Altman's early analogies between OpenAI and the Manhattan Project. Was OpenAI really building the equivalent of a nuclear weapon? It was a strange contrast to the optimistic, idealistic culture it had built so far as a largely academic organization. On Fridays, employees would kick back after a long week with music and wine nights, unwinding to the soothing sounds of a rotating cast of colleagues playing the office piano late into the night.

This shift in mood made some people increasingly anxious about seemingly innocuous events. Once, a journalist followed an employee into the locked parking lot to gain access to the building. Another time, an employee found an unaccounted-for USB stick, raising questions about whether it contained malware, a common attack vector, and was part of an attempted security breach. (After being examined on an air-gapped machine, one completely disconnected from the internet, the USB turned out to be nothing.) Amodei himself, at least half in earnest, also used an air-gapped machine to write crucial strategy documents, connecting the device directly to a printer so that only physical copies could be printed. He worried that state actors could hack into OpenAI's secrets and build their own powerful AI systems.

"No one was prepared for this responsibility," one person recalls. It kept people awake at night.

Altman himself was wary of anyone leaking information. He was privately concerned about OpenAI's continued office sharing with Neuralink staff, an arrangement that had grown more uneasy after Elon Musk's departure. Altman was also worried about Musk himself, who maintained an extensive security apparatus, including bodyguards and personal drivers. At one point, Altman quietly ordered an electronic countersurveillance audit, a sweep of the office for hidden listening devices, in an effort to find any bugs Musk might have left behind to spy on OpenAI.

To employees, Altman invoked the fear of US adversaries moving as quickly as possible to justify why the company needed to become less and less open and to move as fast as possible itself. In his vision statement he wrote, "We must hold ourselves accountable for a good outcome for the world." If an authoritarian government built AGI before OpenAI did and misused it, the company would also have failed at its mission; to succeed, it would almost certainly need to make rapid technical progress.

Karen Hao notes in the author's note at the start of the book that she "contacted all of the major figures and companies that are described in this book to ask for interviews and comment." Sam Altman and OpenAI declined to cooperate. She also reached out to Elon Musk for comment but did not receive a response.


    Source credit
