And potentially very embarrassing for all of crypto.

The trial of Sam Bankman-Fried is likely to be more consequential than just whether the man himself is found guilty. Depending on what evidence is introduced during the trial, it could be rough for the entire crypto industry.

“How much damage can this trial do to the already beaten-down reputation of the industry at this point?” asks Yesha Yadav, a law professor at Vanderbilt University. “This trial is going to be an excruciating moment for the industry because no one knows what kind of evidence might come out.”

Bankman-Fried’s behavior after the fall of FTX suggests he’s something of a wild card. He may suggest he was acting on the advice of his lawyers. But he may also introduce other evidence that could be troublesome — implying, for instance, that he was engaged in standard industry behavior or that everything that happened was Binance’s fault. That may be risky, but we already know that Bankman-Fried loves risk.

“Is he going to throw the entire industry under the bus?” Wong asks. “An idea like, ‘Everyone was doing this, it’s not fair I’m the only one who was charged?’” That may not fly in a court of law, but it could absolutely damage public perception of crypto at large.

And it will pay legal fees if its customers end up in any lawsuits about it.

Getty Images is so confident its new generative AI model is free of copyrighted content that it will cover any potential intellectual-property disputes for its customers.

The generative AI system, announced today, was built by Nvidia and is trained solely on images in Getty’s image library. It does not include logos or images that have been scraped off the internet without consent.

Numerous natural language processing (NLP) applications have benefited greatly from large language models (LLMs). While scaling has improved LLMs’ performance and given them new capabilities, they still “hallucinate,” producing text that is inconsistent with the facts seen during pre-training. This is a significant barrier to adoption in high-stakes applications, such as clinical and legal settings, where generating trustworthy text is essential.

The maximum likelihood language-modeling objective, which minimizes the forward KL divergence between the data and model distributions, may be to blame for LMs’ hallucinations, though this is far from certain. Pursuing this objective can lead the LM to assign non-zero probability to phrases that are not fully consistent with the knowledge encoded in the training data.
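To spell out the connection: maximum likelihood training is equivalent to minimizing the forward KL divergence, since

```latex
D_{\mathrm{KL}}\!\left(p_{\text{data}} \,\|\, p_\theta\right)
  = \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log p_{\text{data}}(x)\right]
  - \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log p_\theta(x)\right],
```

where the first term does not depend on the model parameters $\theta$, so minimizing the divergence is the same as maximizing the expected log-likelihood. The forward direction assigns an infinite penalty whenever $p_\theta(x) = 0$ but $p_{\text{data}}(x) > 0$, so it pushes the model to spread probability mass broadly, which can leave non-zero probability on strings the training data does not actually support.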

From the perspective of model interpretability, studies have shown that the earlier layers of transformer LMs encode “lower-level” information (such as part-of-speech tags), while the later layers encode more “semantic” information.
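Claims like this are typically tested with linear probes: freeze the model, take hidden states from a given layer, and check whether a simple linear classifier can read off a property such as a part-of-speech tag. A minimal sketch of the idea, using synthetic arrays as stand-ins for real hidden states (the layer names and the planted signal are illustrative assumptions, not the paper’s actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for transformer hidden states: 200 tokens, 64 dims.
# We plant a linearly decodable binary "POS-like" label in layer_early
# and leave layer_late as pure noise, mimicking a layer that has
# discarded that information.
labels = rng.integers(0, 2, size=200)            # 0/1 tags per token
signal = rng.normal(size=64)                     # direction carrying the label
layer_early = rng.normal(size=(200, 64)) + np.outer(2 * labels - 1, signal)
layer_late = rng.normal(size=(200, 64))          # no label signal present

def linear_probe_accuracy(states, labels, n_train=100):
    """Fit a least-squares linear probe on a train split, score on the rest."""
    X_tr, y_tr = states[:n_train], 2 * labels[:n_train] - 1  # targets in {-1, +1}
    w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    preds = (states[n_train:] @ w > 0).astype(int)
    return (preds == labels[n_train:]).mean()

acc_early = linear_probe_accuracy(layer_early, labels)
acc_late = linear_probe_accuracy(layer_late, labels)
```

High probe accuracy on one layer but not another is taken as evidence that the first layer linearly encodes the property; here the probe recovers the planted label from `layer_early` but performs at chance on `layer_late`.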

Users should continue to evaluate their settings to understand how they are being tracked.

Following a lengthy investigation into its data practices, Google has agreed to pay $93 million to resolve claims that it misled consumers about how and when their location information was tracked and stored. The investigation was led by the California Department of Justice.

This is according to a statement released by California Attorney General Rob Bonta’s office on Thursday.

Google is no stranger to lawsuits. The company has been the target of legal action over privacy and data protection throughout the years.

Musk, Zuckerberg, Altman, Gates, and Huang were in attendance.

US lawmakers met with the who’s who of the tech industry on Wednesday to discuss regulations for artificial intelligence and potentially work towards a law that protects US citizens from the dangers of the technology.

In attendance were Tesla CEO Elon Musk, Meta CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai, NVIDIA CEO Jensen Huang, Microsoft CEO Satya Nadella, IBM CEO Arvind Krishna, former Microsoft CEO Bill Gates, and AFL-CIO labor federation President Liz Shuler, reported Reuters.

EU law requires manufacturers to adopt USB Type-C chargers in phones, tablets, and other devices by December 2024.

Apple will switch the proprietary Lightning charging port in iPhones to a USB Type-C port instead, according to reports.

The EU law says that by the end of 2024, all mobile phones, tablets, and cameras sold in EU member states must be equipped with a USB-C charging port. From spring 2026, the obligation will extend to laptops as well. The law’s overall purpose is to cut down on environmental waste and save consumers an estimated $247 million per year.

Racially biased artificial intelligence (AI) is not only misleading, it can be downright detrimental, destroying people’s lives. This is a warning University of Alberta Faculty of Law assistant professor Dr. Gideon Christian issued in a press release from the institution.

Christian is most notably the recipient of a $50,000 Office of the Privacy Commissioner Contributions Program grant for a research project called Mitigating Race, Gender and Privacy Impacts of AI Facial Recognition Technology. The initiative seeks to study race issues in AI-based facial recognition technology in Canada. Christian is considered an expert on AI and the law.

“There is this false notion that technology unlike humans is not biased. That’s not accurate,” said Christian, PhD.

Ensuring security in the software market is undeniably crucial, but it is important to strike a balance that avoids excessive government regulation and the burdens of government-mandated legal responsibility, also called a liability regime. While there is no question that the market is broken with regard to security and that intervention is necessary, a less intrusive approach can let the market find the right level of security while minimizing heavy-handed government involvement.

Imposing a liability regime on software companies may go too far and create unintended consequences. The downsides of liability, such as increased costs, potential legal battles, and disincentives to innovation, can hinder the development of secure software without necessarily guaranteeing improved security outcomes. A liability regime could also burden smaller companies disproportionately and stifle the diversity and innovation present in the software industry.

Instead, a more effective approach involves influencing the software market through measures that encourage transparency and informed decision-making. By requiring companies to be fully transparent about their security practices, consumers and businesses can make informed choices based on their risk preferences. Transparency allows the market to drive the demand for secure software, enabling companies with robust security measures to potentially gain a competitive edge.