
Updated: Aug 3, 2024

This week EU Regulation 2024/1689, "laying down harmonised rules on artificial intelligence," took effect. The European Artificial Intelligence Act will regulate the use of artificial intelligence with the aim of protecting the rights and safety of EU citizens.


The regulation does not seek to restrict spam filters or AI that suggests products to consumers. It will require chatbots to disclose to the people they communicate with that they are in fact AI and not human beings. Generative AI content, such as images or video created by AI, will need to be flagged as content created by artificial intelligence.



Paragraph 30 of EU Regulation 2024/1689 forbids using biometric data to predict a person's sexual orientation, religion, race, sexual behavior, or political opinions, although it provides an exception for filtering biometric data to comply with other EU and member state laws - specifically noting that police forces can sort images by hair or eye color to identify suspects.


Paragraph 31 addresses the prohibition of social scoring systems, which use AI to evaluate the trustworthiness of an individual:


AI systems providing social scoring of natural persons by public or private actors may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify natural persons or groups thereof on the basis of multiple data points related to their social behaviour in multiple contexts or known, inferred or predicted personal or personality characteristics over certain periods of time. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. AI systems entailing such unacceptable scoring practices and leading to such detrimental or unfavourable outcomes should therefore be prohibited. That prohibition should not affect lawful evaluation practices of natural persons that are carried out for a specific purpose in accordance with Union and national law.


The People's Republic of China's Social Credit System is used to place individual debtors on blacklists, but it is more often used to enforce regulations against companies.


The Act requires AI systems used for healthcare or employee recruitment to be subject to human oversight and to use high-quality data. High-risk AI systems will need to be registered in a database maintained by the EU and will need a declaration of conformity.


High-risk AI systems will have to bear a CE marking (physical or digital) to show that they conform with the Act. The CE ('conformité européenne') marking is widely used to show that a product conforms with health and safety regulations.


AI developers will not have to fully comply with the EU AI Act until August 2, 2027.

This month the S.D.N.Y. dismissed much of the SEC's fraud suit against the software developer SolarWinds Corp. A SAML certificate (used to sign the authentication and authorization data exchanged between parties) for SolarWinds' information technology infrastructure platform, Orion, was compromised, and malicious actors were able to gain access to the networks of government agencies that used Orion.


The SEC had alleged that SolarWinds failed to disclose information about the SUNBURST cyberattack of 2020 quickly enough. In his decision, Op. & Order, SEC v. SolarWinds Corp., No. 1:23-cv-09518-PAE (S.D.N.Y. July 18, 2024), ECF No. 125, Judge Paul Engelmayer sustained a claim of fraud based on the SolarWinds Security Statement, but dismissed claims of fraud based on other filings.


In discussing whether cybersecurity risk disclosures made in SolarWinds' SEC filings about its Orion platform were adequate, the Court considered whether two previous incidents, in which attacks allowed the platform to contact unauthorized external websites, meant that SolarWinds had been subject to a systematic attack. The two incidents differed: in one, Orion was exploited to send data about the network on which it was installed; in the other, Orion was used to download malware. Because SolarWinds could not find the root cause of the attacks, and could not be certain that they were associated with one another, it was not required to update its cybersecurity risk disclosure.


To the extent the SEC, in terming the disclosure generic, means to fault Solar Winds for not spelling out these risks in greater detail, the case law does not require more, for example, that the company set out in substantially more specific terms scenarios under which its cybersecurity measures could prove inadequate. As decisions in this District have recognized, the anti-fraud laws do not require cautions to be articulated with maximum specificity. Indeed, these decisions have recognized policy reasons not to require as a matter of law that disclosures be made at the level of specificity known to the issuer. Spelling out a risk with maximal specificity may backfire in various ways, including by arming malevolent actors with information to exploit, or by misleading investors based on the formulation of the disclosure or the disclosure of other risks at a lesser level of specificity.


Id. at 73. (emphasis added).


The Court also rejected the SEC's claim on SolarWinds' post-SUNBURST disclosures. "As to post-SUNBURST disclosures, the Court dismisses all claims. These do not plausibly plead actionable deficiencies in the company's reporting of the cybersecurity hack. They impermissibly rely on hindsight and speculation." Id. at 3. Judge Engelmayer found unpersuasive the SEC's allegation that the failure to state in a Form 8-K filing (made days after the discovery of the SUNBURST breach) that malicious code had been used in the two prior attacks made the filing materially misleading.



If you have tried to analyze a load file in Excel 365, you may have noticed that you don't get the option to set the text qualifier. When you imported a delimited text file or .csv file, previous versions of Excel would let you choose a specific text qualifier - a character that sets off text fields between delimiters so that no error results when the same delimiter, such as a comma, is used within an imported field.
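The role a text qualifier plays can be illustrated outside of Excel. This short sketch uses Python's standard csv module (the file name and field values are made up for the example) to parse a comma-delimited record in which one field itself contains a comma:

```python
import csv
import io

# A comma-delimited record in which the second field contains a comma.
# The double quote acts as the text qualifier, so the embedded comma
# is read as data rather than as a field separator.
data = 'DocID,Title,Custodian\nDOC001,"Smith, John - Deposition",J. Smith\n'

rows = list(csv.reader(io.StringIO(data), delimiter=',', quotechar='"'))
print(rows[1])  # ['DOC001', 'Smith, John - Deposition', 'J. Smith']
```

Without the quotechar setting off the second field, the parser would split "Smith, John - Deposition" into two fields - the same kind of error Excel produces when it imports a delimited file without the right text qualifier.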



However, if you try to import a load file into Excel 365 by using 'From Text/CSV' or 'Get Data . . . From File', this option will be missing.



You can set a delimiter but not a text qualifier. If you want to import data into Excel with the old options, go to File . . . Options . . . Data . . . and check the option for 'From Text (Legacy)'.



Now when you go to Data . . . Get Data . . ., you'll see an option for Legacy Wizards, which will let you import data using the old wizard that gives you the option to set a text qualifier.



Sadly, Excel doesn't allow you to enter a custom text qualifier.
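If a load file uses a qualifier Excel won't accept, a short script can do the parsing for you. A sketch using Python's csv module, assuming the common Concordance .dat convention of the thorn character (þ, hex FE) as the text qualifier and the ASCII 020 character (hex 14) as the delimiter - adjust both to match your own load file:

```python
import csv
import io

# A Concordance-style load file record: ASCII 020 ('\x14') separates
# the fields, and thorn ('\xfe') is the custom text qualifier.
line = '\xfeDOC001\xfe\x14\xfeSmith, John\xfe\x14\xfe2020-12-13\xfe\n'

rows = list(csv.reader(io.StringIO(line), delimiter='\x14', quotechar='\xfe'))
print(rows[0])  # ['DOC001', 'Smith, John', '2020-12-13']
```

The parsed rows could then be written back out as a standard comma-delimited, double-quoted .csv file that Excel will open without any import wizard at all.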


Sean O'Shea has more than 20 years of experience in the litigation support field with major law firms in New York and San Francisco.   He is an ACEDS Certified eDiscovery Specialist and a Relativity Certified Administrator.


The views expressed in this blog are those of the owner and do not reflect the views or opinions of the owner’s employer.


If you have a question or comment about this blog, please make a submission using the form to the right. 


© 2015 by Sean O'Shea . Proudly created with Wix.com
