Brooke Anderson-Tompkins on the ‘lifescape’ of AI

Editor in Chief Sarah Wheeler sat down with Brooke Anderson-Tompkins to talk about responsible AI and the scope of implementing AI in your business. Anderson-Tompkins was president of 1st Priority Mortgage for 15 years and is a former chair of the Community Mortgage Lenders of America (now Community Home Lenders of America). She is now founder and CEO of Bridge AIvisory and will be a speaker at HousingWire’s AI Summit.

This interview has been edited for length and clarity.

Sarah Wheeler: You went from leading a mortgage company to starting a consulting company on artificial intelligence. What motivated you to jump into this new area?

Brooke Anderson-Tompkins: Jump is a great way to phrase it! The short answer to your question is that I was driven by a passion for innovation and a desire to leverage AI to create impactful solutions for the industry.

It absolutely wasn’t about the headline hype, but the hype informed me to the extent that, as I looked at the things I was passionate about, the opportunity to incorporate artificial intelligence into the mortgage ecosystem definitely had me leaning in. And the real possibility of creating efficiencies, reducing costs and maintaining what I refer to as the ‘heart of human’ definitely had my attention.

SW: How does your background inform how you approach clients of Bridge AIvisory?

BAT: Having spent nearly the last two decades in the real estate-affiliated space, I have first-hand reference to the drivers — beginning on the real estate side and then the cascade through mortgage and the collective core services. And then I was based out of New York, so I spent my fair share of time in the regulatory and compliance side and that translated over time to my spending many years in D.C. on the advocacy component, as well. And all of those things largely equate to business.

So many of the business components stretch across business genres. Especially when it comes to AI, there are components that are broad reaching and can readily be applied, probably 80% of the time, to business in general. And remaining involved on the advocacy piece is a key component. We don’t want another Dodd-Frank and the cost implications that come along with it. The Bridge AIvisory approach is very similar in many respects, in that I don’t view AI as a magic bullet.

It does have great potential when it is strategically considered, implemented, trained and monitored — for whatever benchmarks or ROI you set — and those principles are incorporated at the outset. Then it has an opportunity for far better results.

SW: What conversations are you having about AI right now?

BAT: It’s been interesting to me, in the couple of months since I introduced Bridge AIvisory, that the conversation begins just as the AI Summit will in a few weeks: it starts with level setting on the language of artificial intelligence. I refer to it as “from the boardroom to the break room.” It’s not enough to have a session around the AI language; you then have to take that language and incorporate it to build a comprehensive strategy and identify what value you are bringing to the table. And that informs what is referred to as a clean-sheet-of-paper process, a concept that Elizabeth Warren introduced to me a number of years ago.

And what I’ve learned is that the same words can have variant meanings in different contexts and still be accurate. So identifying what those definitions are for the project at hand right out of the gate, and repeating them often, can be key to successful execution, because language becomes part of the culture, and culture is a key component of success.

SW: We’re excited you’re going to be speaking at our AI Summit on responsible AI. What does that term mean?

BAT: My response comes from the training that I received from the Mila AI Institute in Montreal. Mila is a globally recognized deep-learning research institution founded by Yoshua Bengio in 1993. Part of my premise here is that it’s really important to learn from experts.

There is not yet a globally recognized definition of responsible artificial intelligence. For Bridge AIvisory, I’ve adopted the definition from Mila: “It is an approach whereby the lifecycle of an AI system must be designed to uphold, if not enhance, a set of foundational values and principles, including the internationally agreed upon human rights framework, as well as ethical principles.” The definition continues by referencing “the importance of holistically and carefully thinking about the design of any AI system, regardless of its field of application or objective. It is therefore a collection of all choices, implicit or explicit, made in the design of the lifecycle of an AI system that make it either irresponsible or responsible.”

We’re so used to, “Ok, here’s the definition. Give me my task, let’s go.” But AI is a lifescape — it goes so far beyond business. We’re accustomed to something like Dodd-Frank, which affected financial services. We homed in on that and went to task solving the problem. This is so much bigger than that.

So, I think that we need to be conscious, as we create the solutions, to keep these things in mind. And ultimately, the good news is that if you look at that definition, the core principles are things we’re all very familiar with: ethics and values, transparency and explainability, accountability and governance, safety and soundness, privacy and data protection, inclusivity and diversity, and environmental sustainability. The good news is that we do that already.

However, I don’t think that we necessarily look at all of those pieces as we’re working on a given project. And that is part of the responsible AI piece: looking at them holistically on a project basis.

This is part 1 of this interview. Look for part 2 next week.
