AI is hot, I mean really hot. VCs love it, pouring in over $1.5B in just the first half of this year. Consumer companies like Google and Facebook also love AI, with notable apps like Newsfeed, Messenger, Google Photos, Gmail and Search leveraging machine learning to improve their relevance. And it's now spreading into the enterprise, with moves like Salesforce unveiling Einstein, Microsoft's Cortana / Azure ML, Oracle with Intelligent App Cloud, SAP's Application Intelligence, and Google with TensorFlow (and their TPUs).

As a founder of an emerging AI company in the enterprise space, I've been following these recent moves by the big titans closely, because they put us (as well as many other ventures) in an interesting spot. How do we position ourselves and compete in this environment?

In this post, I'll share some of my thoughts and experiences around the whole concept of AI-First, the "last mile" problems of AI that many companies ignore, the overhype issue that's facing our industry today (especially as larger players enter the game), and my predictions for when we'll reach mass AI adoption.

Defining AI-First vs. AI-Later

A few years ago, I wrote about the key tenets of building Predictive-First applications, a concept synonymous with the idea of AI-First that Google is pushing. A great example of Predictive-First is Pandora (disclosure: Infer customer). Pandora didn't try to redo the music player UI -- there were many services that did that, and arguably better. Instead, they focused on making their service intelligent by providing relevant recommendations. No need to build or manage playlists. This key differentiation led to their rise in popularity, and that differentiation depended on data intelligence that started on day one. Predictive wasn't sprinkled on later (that's AI-Later, not AI-First, and there's a big difference -- keep reading).

If you are building an AI-First application, you need to follow the data -- and you need a lot of data -- so you would likely gravitate towards integrating with big platforms (as in big companies with customers) that have APIs to pull data from.

For example, a system like CRM.

There's so much valuable data in a CRM system, but five years ago, pretty much no one was applying machine learning to this data to improve sales. The data was, and still is for many companies, untapped. There's got to be more to CRM than basic data entry and reporting, right? If we could apply machine learning, and if it worked, it could drive more revenue for companies. Who would say no to this?

So naturally, we (Infer) went after CRM (Salesforce, Dynamics, SAP C4C), along with the marketing automation platforms (Marketo, Eloqua, Pardot, HubSpot) and even custom sales and marketing databases (via REST APIs). We helped usher in a new category around Predictive Sales and Marketing.

We can't complain much -- we've amassed the largest customer base in our space, and have published dozens of case studies showcasing customers achieving results like 9x improvements in conversion rates and 12x ROI via vastly better qualification and nurturing programs.

But it was hard to build our solutions, and it remains hard to do so at scale. It's not because the data science is hard (although that's an area we take pride in going deep on); it's the end-to-end product and packaging that's really tough to get right. We call this the last mile problem, and I believe it's an issue for any AI product -- whether in the enterprise or consumer space.

Now, with machine learning infrastructure out in the open -- with flowing (and free) documentation, how-to guides, online courses, open source libraries, cloud services, etc. -- machine learning is being democratized.

Anyone can model data. Some do it better than others, especially those with more infrastructure (for deep learning and huge data sets) and a better understanding of the algorithms and the underlying data. You may occasionally get pretty close with off-the-shelf approaches, but it's almost always better to optimize for a particular problem. By doing so, you'll not only squeeze out somewhat better performance, but the understanding you gain from going deep will help you generalize and handle new data inputs better -- which is key for knowing how to explain, fix, tweak and train the model over time to maintain or improve performance.

But still, this isn't the hardest part. This is the sexy, fun part (well, mostly ... whether the data cleaning and matching counts as fun depends on who you talk to :).

The hardest part is creating stickiness.

The Last Mile of AI

How do you get regular business users to depend on your predictions, even though they won't understand all of the science that went into calculating them? You want them to trust the predictions, to understand how to best leverage them to drive value, and to change their workflows to depend on them.

This is the last mile problem. It is a very hard problem -- and it's a product problem, not a data scientist problem. Having an army of data scientists isn't going to make this problem better. In fact, it may make it worse, as data scientists typically want to focus on modeling, which may lead to over-investing in that aspect versus thinking about the end-to-end user experience.

To solve last mile problems, vendors need to successfully tackle three critical
components:

1) Getting "predictive everywhere" with integrations

It's very important to understand where the user needs their predictions -- and this may not be in just one system, but many. We had to provide open APIs and build direct integrations for Marketo, Eloqua, Salesforce, Microsoft Dynamics, HubSpot, Pardot, Google Analytics and Microsoft Power BI.

Integrating into these systems is not fun. Each one has its own challenges: how to push predictions into records without locking out users who are editing at the same time; how to pull all the behavioral activity data out to determine when a prospect will be ready to buy (without exceeding the API limits); how to populate predictions across millions of records in minutes, not hours; etc.

These are hard software and systems problems (99% perspiration). In fact, the
integration work likely consumed more time than our modeling work.
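
To give a concrete flavor of one of these problems, here's a minimal sketch (in Python) of pushing scores back into a CRM in throttled batches. The crm_client object and its batch_update method, the field names, and the quota numbers are all hypothetical assumptions for illustration, not any particular vendor's API.

```python
import time

# Hypothetical illustration: write predictions back to a CRM in batches
# while respecting an assumed daily API quota. Nothing here is a real
# vendor's API -- batch_update, the field names, and the limits are all
# stand-ins.

API_CALLS_PER_DAY = 100_000      # assumed daily quota
BATCH_SIZE = 200                 # assumed max records per update call
SECONDS_BETWEEN_CALLS = 1.0      # simple client-side throttle

def push_scores(crm_client, scored_records):
    """Push predicted scores to CRM records in rate-limited batches."""
    calls_used = 0
    for i in range(0, len(scored_records), BATCH_SIZE):
        if calls_used >= API_CALLS_PER_DAY:
            raise RuntimeError("daily API quota exhausted; resume later")
        batch = scored_records[i:i + BATCH_SIZE]
        # Update only the score field, so concurrent user edits to other
        # fields on the same records aren't clobbered or locked out.
        crm_client.batch_update(
            [{"id": r["id"], "fields": {"predictive_score": r["score"]}}
             for r in batch]
        )
        calls_used += 1
        time.sleep(SECONDS_BETWEEN_CALLS)
```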

This is what it means to be truly "predictive everywhere." Some companies like Salesforce are touting this idea, but their version is closed to their own stack. For specific solutions like predictive lead scoring, this falls apart quickly, because most mid-market and enterprise companies run lead scoring in marketing automation systems like Marketo, Eloqua and HubSpot.

Last mile here means you're investing more in integrating predictions into other systems than in your own user experience or portal. You go to where the user already is -- that's how you get sticky -- not by trying to create new behaviors on your own site (even if you can make your site look way prettier and function better). What matters is stickiness. Period.

2) Building trust

Trust is paramount to achieving success with predictive solutions. It doesn't matter if your model works if the user doesn't act on it or believe in it. A key area to establish trust around is the data, and specifically the external data (i.e. signals not in the CRM or marketing automation platforms -- a big trick we employ to improve our models and to de-noise dirty CRM data).

Sometimes, customers want external signals that aren't just useful for improving model performance. Signals like whether a business offers a free trial on their website might also play an important operational role in helping a company take different actions for specific types of leads or contacts. For example, with profiling and predictive scoring solutions, they could filter and define a segment, predict the winners from that group, and prioritize personalized sales and marketing programs to target those prospects.
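
As a toy sketch of that workflow, assuming hypothetical field names and a scoring function supplied elsewhere:

```python
# Toy sketch: filter leads on an external signal, rank the segment by
# predicted score, and hand the top prospects to a targeted program.
# The "offers_free_trial" field and score_fn are illustrative assumptions.

def prioritize_free_trial_leads(leads, score_fn, top_n=50):
    """Return the highest-scoring leads whose company offers a free trial."""
    segment = [lead for lead in leads if lead.get("offers_free_trial")]
    ranked = sorted(segment, key=score_fn, reverse=True)
    return ranked[:top_n]  # feed these into a personalized sales program
```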

In addition to exposing our tens of thousands of external signals, another way we build trust is by making it easy and flexible to customize our solution to the unique needs and expectations of each customer. Some companies may need multiple models, segmented by region / market / product line (when there is enough training data), or "lenses" (essentially, normalizing another model that has more data) when there isn't enough. They then need a system that guides them through those solutions and tradeoffs. Some companies care about the timing of deals; they may have particular cycle times they want to optimize for, or they may want their predictions to bias towards higher deal size, higher LTV, etc.
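
The multiple-models-versus-lenses decision ultimately hinges on a data-volume check. A minimal sketch of that choice, where the threshold and model lookup are assumptions:

```python
# Illustrative sketch of the tradeoff above: use a dedicated per-segment
# model only when the segment has enough training data; otherwise fall
# back to a broader shared model (the "lens"). The threshold is an
# assumed value, not a universal rule.

MIN_TRAINING_EXAMPLES = 1_000

def model_for_segment(segment, training_counts, dedicated_models, shared_model):
    """Pick a dedicated model if the segment has enough data, else the lens."""
    if training_counts.get(segment, 0) >= MIN_TRAINING_EXAMPLES:
        return dedicated_models[segment]
    return shared_model  # not enough data: lean on the broader model
```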

Some customers want the models to update as they close more deals. This is known as retraining the model, but over-retraining can result in bad performance. For example, say you're continuously and automatically retraining with every new example, but the customer is in the middle of a messy data migration. It would be better to wait until that migration completes, to avoid incorrectly skewing the model during that period. What you need is model monitoring, which gauges live performance and notices dips or opportunities to improve when there's new data. The platform then alerts the vendor and the customer, and that finally results in a proper retraining.
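
In other words, retraining should be gated by a health check rather than triggered blindly. A minimal sketch of that monitoring idea, where the metric, window size, and tolerance are assumptions:

```python
# Minimal sketch of model monitoring: compare live performance over a
# recent window against the baseline measured at deployment, and raise
# an alert instead of silently retraining. AUC, the window size, and the
# tolerance are illustrative choices.

def check_model_health(recent_outcomes, baseline_auc, compute_auc,
                       tolerance=0.05, min_window=500):
    """Return a status flag; 'alert' means notify vendor and customer."""
    if len(recent_outcomes) < min_window:
        return "insufficient-data"   # too few outcomes to judge yet
    live_auc = compute_auc(recent_outcomes)
    if live_auc < baseline_auc - tolerance:
        return "alert"               # performance dip: review before retraining
    return "healthy"
```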

Additionally, keep in mind that not all predictions will be accurate, and the customer will sometimes see these errors. It's important to provide them with options to report such feedback via an active process that actually results in improvements to the models. Customers expect their vendor to be deep on details like these. Remember, for many people AI still feels like voodoo, science fiction, and too much of a black box (despite the industry's best efforts to visualize and explain models). Customers want transparent controls that support a variety of configurations in order to believe in, and thus operationalize, a machine-learned model.

3) Making predictive disappear with proven use cases

Finally, let's talk about use cases and making predictive disappear into a product. This is a crucial dimension and a clear sign of a mature AI-First company. There are a lot of early startups selling AI as their product to business users. However, most business users don't want AI, nor should they -- they want a solution to a problem. AI is not a solution, but an optimization technique. At Infer, we support three primary applications (or use cases) to help sales and marketing teams: Qualification, Nurturing and Net New. We provide workflows that you can install in your automation systems to leverage our predictive tech and make each of these use cases more intelligent. In fact, we could position and sell these apps without even mentioning the word predictive, because it's all about the business value.

In our space, most VPs of Sales or Marketing don't have Ph.D.s in computer science or statistics. They want more revenue, not a machine learning tutorial. Our pitch then goes something like this:

"Here are three apps for driving more revenue. Here's how each app looks in our portal, and here are the workflows in action in your automation systems ... here are the ROI visualizations for each app ... let's run through a bunch of customer references and success studies for the apps that you care about. Oh, and our apps happen to leverage a variety of predictive models, which we'll expose to you too if you want to go deep on those."

Predictive is core to the value, but it's not what we lead with. Where we're different is in the lengths we go to guide our customers with real-world playbooks, to formulate and vet models that best serve their individual use cases, and to help them establish sticky workflows that drive consistent success. We'll initially sell customers one application, and hopefully, over time, the depth of our use cases will impress them so much that we cross-sell them into all three apps. This approach has been huge for us. It's also been a major differentiator -- we achieved our best-ever competitive win rate this year (despite 2016 being our most competitive year yet) by talking less about predictive.

Vendors that are overdoing the predictive and AI talk are missing the point and don't realize that data science is a behind-the-scenes optimization. Don't get me wrong: it's sexy tech, it's a fun category to be in (it certainly helps with engineering recruiting), and it makes for great marketing buzz, but that positioning is not terribly helpful in the later stages of a deal or for driving customer success.

The focus needs to be on the value. When I hear companies just talking about predictive, and not about value or use cases / applications, I think they're playing a dangerous game for themselves as well as for the market. It hurts them because that's not something you can differentiate on anymore (remember, anyone can model). Sure, your model may be better, but the end buyer can't tell the difference, or may not be willing (or know how) to run a rigorous evaluation to see those differences.

The Overhype Issue

Vendors in our space often over-promise and under-deliver, resulting in many churn
cases, which, in turn, hurts the reputation of the predictive category overall. At
first, this was just a problem with the startups in our space, but now we�re seeing
it from the big companies as well. That's even more dangerous, as they have bigger
voice boxes and reach. It makes sense that the incumbents want to sprinkle AI-
powered features into their existing products in order to quickly impact thousands
of their customers. But with predictive, trust is paramount.

Historically, in the enterprise, the market has been accustomed to overhyped products that don't ship for years after their initial marketing debuts. However, in this space, I'd argue that overhyping is the last thing you should do. You need to build trust and success first. You need to under-promise and over-deliver.

Can the Giants Really Go Deep on AI?

The key is to hyper-focus on one end-to-end use case to start, go deep, do it well with a few customers, learn, repeat with more, and keep going. You can't just roll out an AI solution to many business customers at once, although that temptation is there for a bigger company. Why only release something to 5% of your base when you can generate way more revenue if it's rolled out to everyone? This forces a big company to build a more simplified, "checkbox" predictive solution for the sake of scale, but that won't work for mid-market and enterprise companies, which need many more controls to address complex, but common, scenarios like multiple markets and objectives.

Such a simplified approach caters better to smaller customers that desire turnkey products, but unlike non-predictive enterprise solutions, predictive solutions face a big problem with smaller companies: limited data. You need a lot of data for AI, and most small businesses don't have enough transactions in their databases to learn patterns from (I'd also contend that most small companies shouldn't be focusing on optimizing their sales and marketing functions anyway, but rather on building a product and a team).

So, inherently, AI is biased towards mid-market / enterprise accounts, but their demands are so particular that they need a deeper solution that's harder to productize for thousands. Figuring out how to build such a scalable product is much better done within a startup than in a big company, given the incredible focus and patience that's needed.

AI really does work for many applications, but more vendors need to get good at solving the last mile -- the 80% that depends less on AI and more on building the vehicle that runs on AI. This is where emerging companies like Infer have an advantage. We have the patience, focus, and depth to solve these last mile problems end-to-end, and to do it in a manner that's open to every platform, not closed off to one company's ecosystem. This matters, especially in the sales and marketing space, where almost every company runs a fragmented stack with many vendors.

It's also much easier to solve these end-to-end problems without the legacy issues of an industry giant. At Infer, we started out with AI from the very beginning (AI-First), not AI-Later like most of these bigger companies. Many of them will encounter challenges when it comes to processing data in a way that's amenable to modeling, monitoring, etc. We're already seeing these large vendors having to forge big cloud partnerships to overhaul their backends in order to address their scaling issues. I actually think some of the marketing automation companies still won't be able to improve their scale, given how dependent they are on legacy backend designs that weren't meant to handle expensive data mining workloads.

Many of these companies will also need to rework security requirements dating from the days of first moving customers over to the cloud. Some of their legacy security provisions may prevent them from even looking at or analyzing a customer's data (which is obviously essential for modeling).

When you solve one problem really well, the predictive piece almost disappears for the end user (like with our three applications). That's the litmus test of a good AI-powered business application. But that's not what we're seeing from the big companies and most startups. It's quite the opposite -- in fact, we're seeing more over-generalization.

They're making machine learning feel like AWS infrastructure: just build a model in their cloud and connect it somehow to your business database, like a CRM. After five years of experience in this game, I'll bet the bank that approach won't result in sticky adoption. Machine learning is not like AWS, which you can just spin up and magically connect to some system. "It's not commoditizable like EC2" (Prof. Manning at Stanford). It's much more nuanced and personalized to each use case. And this approach doesn't address the last mile problems, which are harder and typically more expensive than the modeling part!

From AI Hype to Mass Adoption

There aren't yet thousands of companies running their growth with AI. It will take
time, just like it took Eloqua and Marketo time to build up the marketing
automation category. We�re grateful that the bigger companies like Microsoft,
Oracle, Salesforce, Adobe, IBM and SAP are helping market this industry better than
we could ever do.

I strongly believe every company will be using predictive to drive growth within the next 10 years. It just doesn't make sense not to, when we can get a company up and running in a week, show them the ROI via simulations, and only then ask them to pay. Additionally, there are a variety of lightweight ways to leverage predictive for growth (such as powering key forecasting metrics and dashboards) that don't require process changes if you're in the middle of org changes or data migrations.

In an AI-First world, every business must ask the question: what if our competitor is using predictive and achieving 3x better conversion rates as a result? The solution is simple -- adopt AI as well and keep pace in the arms race.

I encourage all emerging AI companies to remain heads-down and focus on customer success and last mile product problems. Go deep, iterate with a few companies, and grow the base wisely. Under-promise and over-deliver. Let the bigger companies pay for your marketing with their big voice boxes, which they're really flexing right now. Do that, and you'll likely succeed beyond measure -- and who knows, we may even replace the incumbents in the process.
