Keeping it real: Long Island professionals use AI cautiously

After years of accelerated investment in AI, many remain skeptical of the lofty promises made by the most zealous purveyors of the technology. Companies say AI hasn’t brought significant changes to the way they run their businesses, while 80% of respondents to a recent National Bureau of Economic Research (NBER) survey of almost 6,000 CEOs, CFOs and executives worldwide reported “no impact on either employment or productivity” from AI.

And of course, the impact of AI, and how it’s used, depends on who you ask.

Sean Murray, CEO of QuadSci in New York City, believes AI can be a powerful tool that empowers workers in certain fields, rather than an existential threat to human labor. “Many AI solutions claim they have full automation or solve a small slice of a problem while taking credit for solving the entire problem,” he says. “Using QuadSci does not require a massive change in how [owners] operate their businesses.”

QuadSci, which bills itself as “the most predictive and prescriptive AI for customer intelligence,” is a collection of B2B applications that perform marketing analytics for software companies. The firm recently raised $8 million in capital investment, which it says will be “used to accelerate product development, expand go-to-market teams, and support continued growth with enterprise customers and strategic partners globally.”

Its service revolves around the analysis of telemetry: granular data that software applications collect on how users interact with them, including time spent on pages, where users click, and how users navigate between different pages.

“Traditionally that data goes to engineering teams and is used to monitor performance to ensure the systems are working,” explains Murray. “Using AI, we translate telemetry into sales and marketing signals to give [owners] the ability to know where new sales and marketing opportunities reside within their customers. Once sales and marketing teams know where those opportunities are, the telemetry can also tell very specifically what to sell next and how to sell it.”

Greg Demetriou, CEO of Lorraine Gregory Communications (LGC) in Edgewood, acknowledges that AI is already used in many facets of the day-to-day business at his marketing company, while stressing that the lion’s share of executive and creative decisions are made by people. “We use it to support our team’s work by enhancing research, data analysis and content ideation, helping us surface insights and identify emerging trends,” he says. “Our creative and LGC Studios teams also use a variety of tools… to assist with tasks like generating visual concepts, cleaning audio, polishing existing voiceovers, upscaling lower-quality video, and logging new video footage.”

AI plays a more sizable role in the company’s production of written communications, according to Demetriou, who says large language models (LLMs) such as OpenAI’s ChatGPT, Microsoft’s Copilot and Anthropic’s Claude are used to “brainstorm, outline, and refine early drafts of proposals and marketing copy,” in addition to summarizing background material or “exploring alternative phrasing.”

“However, all final content, strategic messaging, and brand alignment are developed and reviewed by our communications professionals,” Demetriou says. “Ultimately, we see AI as a way to help our team work more efficiently, while the strategy, creativity and narrative that drive successful campaigns remain firmly human.”

But the statements made in advertising and marketing materials aren’t held to the same standards as statements made in court before a judge, and legal professionals are acutely aware of the potential liability risks of placing too much trust in AI.

“There have been several recent court cases where attorneys were sanctioned for misuse of AI – it was clear to the judges that legal arguments in the submissions were prepared by AI (and not verified by the lawyer) because the case law cited in the papers either did not stand for the premise cited or, worse, the case did not exist at all but instead [was] simply hallucinated by AI,” says Danielle Tricolla, a partner in the Litigation Practice Group at Forchelli Deegan Terrana LLP in Uniondale. “Courts have found this to be a violation of the attorney’s professional and ethical responsibilities.”

Still, the firm has integrated AI-powered systems for searching legal documents, finding them beneficial for parsing large troves of information as long as their inputs and scope are limited. “While we’re continuing to explore these tools, so far they have proven useful in giving attorneys a starting point for researching obscure issues, or finding relevant legal authority that may be difficult to locate using traditional research methods such as Boolean search terms,” Tricolla says.

In law, as in many industries, relying on AI for some tasks can have dire consequences, and Tricolla, speaking from personal experience, warns against allowing AI to punch above its weight. “I’ve had more than one client use AI to draft briefs or motion papers and then suggest that I utilize those briefs in litigation,” she says. “But, upon review, the AI’s legal analysis was totally off-base; it focused on irrelevant issues and facts – which is only natural since AI could not possibly know the complete background/history of the case and therefore could not possibly provide an appropriate analysis.”

Tricolla prevented potential legal missteps in those instances, but reiterated the importance of leaving law to the professionals: “AI is not a reliable replacement for an attorney.”