If you have experience with artificial intelligence and machine learning, there’s a decent chance an investment bank will be interested in your background. Firms are building AI applications to streamline trading, research, retail and even wealth management functions with their new fleet of robo-advisors. Yet a thorough mining of banks’ career sites suggests that firms like Goldman Sachs, Morgan Stanley and J.P. Morgan are putting plenty of focus on recruiting AI engineers for surveillance – of both customers and employees.
Goldman Sachs, for example, is looking for multiple AI software engineers – from analysts to executive directors – to join its surveillance analytics group (SAG), which sits in its compliance division. Candidates would build and maintain surveillance systems to spot potential money laundering, as well as insider trading and market manipulation – two issues that have haunted banks in recent years. Interest rate manipulation scandals have cost banks billions in fines and have ended the careers of dozens of traders. Even before these new hires, SAG is already the largest data-consuming team at Goldman Sachs. Current members of the group include professors, research scientists and former traders. Goldman didn’t respond to a request for comment on the new postings.
Of course, using artificial intelligence as part of employee and client surveillance isn’t new. Credit Suisse partnered with CIA-backed Palantir in 2016 as part of a joint venture to better monitor its employees. (Bloomberg recently described Palantir as “an intelligence platform designed for the global War on Terror [that] was weaponized against ordinary Americans at home.”) There is also Behavox, a startup founded by former Goldman Sachs research analyst Erkin Adylov that collates and analyzes deviations from the norm, like employees accessing data at odd times or even the frequency of their bathroom trips. But now it appears big banks are doing more to build that technology in-house.
J.P. Morgan has been electronically monitoring its employees for eons. Earlier this year, it emerged that the bank also employed Palantir back in 2009, and that it hired a former U.S. Secret Service agent to run a threat group that uses algorithms to monitor the bank’s employees “to protect against perfidious traders and other miscreants.” The bank is currently on the lookout for a cyber security and technology controls analyst with a deep understanding of automation, machine learning and artificial intelligence. “Reporting and governance is also a key part of the role to ensure controls are being measured and monitored,” the posting reads.
JPM just hired Google’s Apoorv Saxena as its new head of artificial intelligence and machine-learning services. Saxena made a key hire from Facebook within two weeks of starting and is actively recruiting to build up his new team. Meanwhile, Morgan Stanley shows several new openings within its financial crimes technology unit, each of which includes machine learning and AI as a desired skill.
Finra recently published a new report on regulator technology, in which it identified five key applications for technologies including AI, natural language processing and big data. The first application on the list was internal surveillance and monitoring of traders, brokers and other employees.
“Market participants have indicated that they are investing significant resources in this area, primarily in tools that seek to utilize cloud computing, big data analytics or AI/machine learning to obtain more accurate alerts and enhance compliance and supervisory staff efficiencies,” the authors wrote. Current investments appear to be centered on human capital, rather than on partnering with fintech startups – a trend that isn’t unique to employee surveillance tools. Big banks have begun shunning some vendors as they bring much of their technology development in-house.