The AI Playbook

Chapter 4: Development


  • There are many ways your company can engage with AI. Use third party AI APIs; outsource; use a managed service; build an in-house team; or adopt a ‘hybrid’ approach combining an in-house team with third party resources.
  • Third party AI APIs fulfil specific functions to a moderate or high standard at low cost. Most solve problems in the domains of vision and language. Numerous APIs are available from Amazon, Google, IBM, Microsoft and also other smaller companies. Features vary; we provide a summary. APIs offer immediate results without upfront investment, at the expense of configurability and differentiation. Use an API if you seek a solution to a generic problem for which an API is available. APIs are unsuitable if you seek solutions to narrow, domain-specific problems, wish to configure your AI, or seek long-term differentiation through AI.
  • Managed services enable you to upload your data, configure and train models using a simple interface, and refine the results. Managed services abstract away much of the difficulty of developing AI and enable you to develop a custom solution rapidly. Managed services offer greater flexibility and control than APIs, but less flexibility than an in-house team, and also require you to transfer data to a third party and may create dependencies.
  • If a third-party solution is unavailable and an in-house team is too expensive, you can outsource your AI development. Whether outsourcing is appropriate will depend upon your domain, expertise, required time to value and data sensitivity. If outsourcing, specify desired frameworks and standards, who will provide training data, costs, timescales and deployment considerations. Outsource if you require trusted expertise quickly and a cheaper option than permanent employees. Avoid outsourcing if your data permissions prohibit it, you require domain or sector knowledge that an outsourcer lacks, or you wish to build knowledge within your own company.
  • An in-house AI team offers maximum control, capability and competitive differentiation – at a price. A small in-house team will cost at least £250,000 to £500,000 per year. A large team requires a multi-million-pound annual investment. To develop an in-house team your company must also: attract, manage and retain AI talent; select development frameworks and techniques; gather and cleanse data; learn how to productise AI into real-world systems; and comply with regulatory and ethical standards. Build an in-house team if you have a problem that cannot be solved with existing solutions, seek differentiation in the market, or seek to maintain control over your data.
  • A ‘hybrid’ approach is ideal for many companies. Plan for an in-house team that will address your requirements to a high standard over time, but use third party APIs to solve an initial, simpler version of your challenge. A hybrid approach can be attractive if you seek rapid initial results, wish to limit spend until a business case is proven and want greater differentiation and resilience over time.
  • To develop AI yourself you have choices to make regarding your AI ‘technology stack’. The stack comprises six layers: hardware; operating systems; programming languages; libraries; frameworks; and abstractions. Not all problems require the full stack.
  • Ensure your team has hardware with graphical processing units (GPUs) that support NVIDIA’s CUDA libraries. Laptops with high performance graphics cards offer flexibility. For greater power, desktop machines with powerful GPUs are preferable. To train large models, use dedicated servers. Cloud-based servers offered by Amazon, Google or Microsoft are suitable for most early stage companies.
  • Apply AI techniques suited to your problem domain. For assignment problems consider: Support Vector Classification; Naïve Bayes; K-Nearest Neighbour Classification; Convolutional Neural Networks; Support Vector Regression; or ‘Lasso’ techniques. We describe each and explain their advantages and limitations. For grouping problems, explore: Meanshift Clustering; K-Means; and Gaussian Mixture Models. For generation, consider: Probabilistic Prediction; Variational Auto-Encoders; and Generative Adversarial Networks.

Development: The Checklist

Create a development strategy

  • Review the advantages and limitations of different development strategies.
  • For your AI initiatives, assess the relative importance to your company of time to value, capability, cost, differentiation, resilience and the development of in-house expertise.
  • Determine the availability of APIs that address your requirements.
  • Assess whether your data permissioning allows use of third party services.
  • Validate that your chosen development strategy offers the trade-offs, integrations with existing systems and resilience your organisation requires.
  • If developing an in-house team, review best practices with regard to strategy, people, data, development, production and regulation (Chapters 1 to 6).

Optimise system development

  • Ensure your team has appropriate hardware for rapid iteration and review ongoing hardware requirements.
  • Match the language you use with the rest of your production activity for simplicity and speed.
  • Understand techniques appropriate for your problem domain (generation, assignment, grouping or forecasting).
  • Experiment with multiple techniques to validate your challenge and highlight characteristics and limitations of your data.
  • Select a technique that offers the combination of accuracy, development speed and runtime efficiency you require.
  • Maintain awareness of alternative techniques and the pace of their improvement.
  • Select frameworks and libraries to accelerate development based upon your requirements for ease of use, development speed, size and speed of solution and level of abstraction and control.

You may not require a large, in-house team to develop AI. There are many ways to engage with AI including third party AI APIs, outsourcing, managed services, creating an in-house AI team, or a ‘hybrid’ approach that combines an in-house team with third party resources. Extensive AI development, however, requires knowledge of AI hardware, development frameworks and techniques. Below, we provide a blueprint for AI development.

We begin by describing the advantages and disadvantages of different development strategies, so you can identify the ideal approach for your company.

The purpose and characteristics of AI frameworks (such as TensorFlow and PyTorch) and popular AI techniques (such as Support Vector Machines and Naïve Bayes) can be confusing. To catalyse your experimentation with AI, we then highlight and explain the AI frameworks and techniques best suited to solve a range of problems.

APIs offer specific functionality fast

You may be able to solve the problem you have identified by using an AI application programming interface (API) from a third party. These services fulfil specific, limited functions to a moderate or high standard at low cost. API calls can process your data and provide an immediate result.

Most AI APIs solve problems in the domains of vision and language. Language APIs include transcription, translation and topic extraction. Vision APIs include object recognition, scene detection and logo identification. Numerous AI APIs are available from Amazon, Google, IBM and Microsoft. Features vary (Fig. 13-15) and are advancing rapidly.
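As an illustrative sketch of the mechanics – every endpoint, field and parameter name below is hypothetical rather than any specific vendor's API – a vision API call typically amounts to posting an encoded image and reading back labels with confidence scores:

```python
import base64

def build_label_request(image_bytes, max_labels=10):
    # Build the JSON body for a hypothetical image-labelling endpoint.
    # Real vendors use different field names; consult their API reference.
    return {
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "max_labels": max_labels,
    }

def parse_label_response(response_json):
    # Extract (label, confidence) pairs from a hypothetical response body.
    return [(item["label"], item["confidence"])
            for item in response_json.get("labels", [])]

# In practice the payload would be POSTed over HTTPS with an API key;
# here we simply round-trip a mocked response.
payload = build_label_request(b"<raw image bytes>", max_labels=5)
mock_response = {"labels": [{"label": "car", "confidence": 0.97}]}
print(parse_label_response(mock_response))  # → [('car', 0.97)]
```

The appeal is clear from the sketch: no model training and no labelled data are required – only request construction and response handling.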

Fig. 13. Image Analysis APIs offer varying features
Features compared across Amazon, Microsoft, Google and IBM (per-vendor availability not reproduced here):
Object detection
Scene detection
Face detection
Face recognition (human face identification)
Facial analysis
Inappropriate content detection
Celebrity recognition
Text recognition
Written text recognition
Search for similar images on web
Logo detection
Landmark detection
Food recognition
Dominant colours detection

Source: Altexsoft

Fig. 14. Video APIs offer varying features
Features compared across Amazon, Microsoft and Google (per-vendor availability not reproduced here):
Object detection
Scene detection
Activity detection
Facial recognition
Facial and sentiment analysis
Inappropriate content detection
Celebrity recognition
Text recognition
Person tracking on videos
Audio transcription
Speaker indexing
Keyframe extraction
Video translation 9 languages
Keywords extraction
Brand recognition
Dominant colour detection
Real-time analysis

Source: Altexsoft. Check with Amazon, Microsoft and Google to see their latest features beyond those shown above.

Fig. 15. Speech and text APIs offer varying features
Features compared across Amazon, Microsoft, Google and IBM (per-vendor availability not reproduced here):
Speech recognition (speech into text)
Text into speech conversion
Entities extraction
Key phrase extraction
Language recognition (Amazon: 100+ languages; Microsoft: 120; Google: 120+; IBM: 60+)
Topics extraction
Spell check
Voice verification
Intention analysis
Metadata extraction
Relations analysis
Sentiment analysis
Personality analysis
Syntax analysis
Tagging parts of speech
Filtering inappropriate content
Low-quality audio handling
Translation (Amazon: 6 languages; Microsoft: 60+; Google: 100+; IBM: 21)
Chatbot toolset

Source: Altexsoft. Check with Amazon, Microsoft, Google and IBM to see their latest features beyond those shown above.

Transferring large volumes of data can become expensive. If you are using Amazon, Google, IBM or Microsoft for other aspects of your platform, and your platform provider’s APIs fulfil your requirements, your existing vendor may be an attractive option.

Many other companies, however, offer high-quality APIs in the fields of vision, language and forecasting (Fig. 16). A fuller list of nearly 200 APIs is available from Programmable Web.

Fig. 16. Many additional companies provide AI APIs
Category Company
VISION Clarifai
Infinite Loop
Prisma Labs
Indata Labs
Meaning Cloud
Spot Intelligence
Automated Insights
Infosys Nia

Source: MMC Ventures

APIs offer immediate, useful results at the expense of niche functionality and differentiation. APIs deliver:
  • Time-to-value: APIs provide immediate capability. By calling an API, your company can make immediate use of functions ranging from language translation to object recognition.
  • Low initial cost: While extensive use can become expensive, APIs can cost as little as tens or hundreds of pounds to use – making AI accessible to companies of all sizes and organisations that seek proof of value before committing to greater budgets.
  • Quality: Large companies, including Google and Microsoft, have invested billions of pounds in their AI services. Many are highly capable.
  • Ease of use: AI APIs are accessible to developers without expertise in AI. Companies without in-house AI knowledge can immediately take advantage of AI via these APIs.
Limitations of APIs include:
  • Functionality: APIs offer specific functionality, often in the fields of vision and language. If your requirements fall outside of what is available, an alternative approach will be required.
  • Configurability: APIs do not allow you to adjust the training data or models on which the services are based. If you wish to develop services based on unique training data you have, or tune underlying algorithms for improved results, APIs will be unsuitable.
  • Genericness: APIs are designed for mass adoption; they tend to be generic and lack depth and domain specificity. An object recognition API may distinguish a BMW from a Skoda, but is unlikely to distinguish a BMW 6 Series from a 7 Series.
  • Commoditisation: The APIs you use are available to your competitors. It will be challenging to create lasting competitive advantage, and associated market value, through use of third party APIs.
  • Lifetime cost: Extensive use of APIs can attract a high cost relative to an in-house solution you own.
  • Dependence: Large vendors have, on occasion, discontinued APIs. Smaller vendors can be acquired or cease to operate. Using third party APIs creates a dependency over which you have no control.
  • Privacy: Using APIs involves passing your data to third parties. Does this comply with your data permissions? Does the third party retain a copy of your data or use it for any other purpose?

Overall, APIs are ideal if you seek an immediate, low cost solution to a generic problem. APIs will be insufficient, however, if you have a niche challenge, seek greater control and configurability, or seek long-term differentiation through AI (Fig. 17).

Fig. 17. APIs offer immediate results at the expense of differentiation
Use APIs if you:
  • Seek a solution to a generic problem for which a relevant API is available
  • Have limited budget
  • Require immediate initial results
  • Have limited in-house AI knowledge and resources.
Avoid APIs if you:
  • Seek a solution to a domain-specific or niche problem for which an API is unavailable
  • Have unique training data, or wish to control and configure your AI, for improved results
  • Seek long-term differentiation through AI
  • Do not wish to rely on third parties
  • Have data permissions that prevent you passing data to third parties.

Source: MMC Ventures

Many companies adopt a ‘hybrid’ approach (page 54), using APIs for rapid proofs-of-concept while transitioning to an in-house team that can deliver improved, differentiated, domain-specific capabilities over time.

“APIs can cost as little as tens or hundreds of pounds to use – making AI accessible to companies of all sizes and organisations that seek proof of value before committing to greater budgets.”

Managed services offer increased capability at low cost

Several vendors offer managed AI services. A step beyond pre-tuned, function-specific APIs, managed services enable you to upload your data, configure and train your own AI models using a simple interface, and refine the results. These services abstract away much of the difficulty of developing AI and enable you to develop a custom solution rapidly, via a simplified interface and limited coding.

Peak, a leading managed AI service company in which we have invested, offers an effective solution. Solutions are also available from Amazon (SageMaker), Google (AutoML), IBM (Watson), Microsoft (Azure) and Salesforce.

Managed services have several advantages:
  • Capability: greater flexibility and control than simple APIs; managed services enable you to develop custom models and, potentially, bespoke IP.
  • Cost: cheaper than building an in-house AI team or outsourcing development.
  • Speed: faster time-to-value than building an in-house AI team.
Limitations include:
  • Control: less control than in-house development; access to underlying models will be limited, reducing customisation and tuning.
  • Permissioning: you must be comfortable transferring your data to a third party.
  • Reliance: it may be expensive or unappealing to migrate away from a managed service provider, given dependencies and data transfer costs.
  • Intellectual property: some vendors retain your data to improve algorithms for all customers; in other cases, practically or contractually, ownership of the model you develop may be limited.

If basic APIs will not satisfy your requirements, managed AI services offer a fast, flexible way to develop bespoke solutions at a lower cost than building an in-house team. Managed services are also ideal for prototyping. If you require more control, flexibility, autonomy and ownership in time, however, significant re-development may be required.

Fig. 18. Managed services offer speed at the expense of control
Use managed services if:
  • Your challenge is a solved problem but your data is key
  • You wish to begin quickly
  • Cost is a challenge.
Avoid managed services if:
  • Your data permissions prohibit this approach
  • You require extensive control and flexibility
  • Speed of response is critical
  • Your problem has unique demands.

Source: MMC Ventures

Outsourcing offers expertise for moderate initial investment

If a suitable API or third party product is unavailable, you will need to build an AI solution. However, investing in an in-house team is expensive – typically at least £250,000 to £500,000 per year, even for a small team. There are cost-effective alternatives. Several companies provide outsourced AI capabilities, ranging from contractor developers, who work with your own engineers, to complete, outsourced AI development.

The nature of AI development enables researchers to work on multiple workstreams simultaneously, so outsourcing can be cheaper than maintaining a permanent team. Conversely, transferring large volumes of data securely and frequently, and retraining models on an ongoing basis, can become expensive. Whether outsourcing is appropriate will depend upon a range of considerations including:

  • Domain: will a third party offer the expertise you require in your problem domain and sector?
  • Expertise: to what extent do you wish to build expertise in-house?
  • Speed: do you require trusted expertise more rapidly than you can develop it in-house? Do you require a solution more quickly than you could build in-house?
  • Data sensitivity: do you have permission to pass data to third parties?
  • Operation: if an outsourcer builds your models, are you entitled to deploy them on your own infrastructure – or are you tied to your outsourcer on an ongoing basis?

Overall if maximising speed and minimising initial costs are your highest priorities, and APIs are unavailable, consider outsourcing (Fig. 19).

If outsourcing, specify:
  • Frameworks: is there a specific AI framework you require the outsourcer to use?
  • Standards: what accuracy (precision and recall – see Chapter 5) must models meet?
  • Data: will you provide cleaned, labelled training data? Or is data to be created by the outsourcer?
  • Costs: what costs have been agreed?
  • Timescales: what timescales must be met? This can be more challenging than for traditional software development because improving a model may require experimentation.
  • Deployment: how production-ready must the output be?
Fig. 19. Outsourcing offers speed at the expense of in-house knowledge
Use outsourcing if you:
  • Require trusted expertise quickly
  • Have clarity regarding the solution you require
  • Require a cheaper alternative to permanent employees.
Avoid outsourcing if you:
  • Have data permissions that prohibit outsourcing
  • Require knowledge regarding your problem domain or sector that an outsourcer cannot offer
  • Wish to build knowledge within your company.

Source: MMC Ventures

“If maximising speed and minimising initial costs are your highest priorities, and APIs are unavailable, consider outsourcing.”

An in-house team offers differentiation – at a price

Investing in an in-house AI team offers maximum control, capability and competitive differentiation – at a price.

An AI team of your own can deliver:
  • Flexibility: Control over the hardware, programming languages, frameworks, techniques and data you employ offers the flexibility to iterate and expand your solutions as your needs evolve.
  • Capability: APIs offer defined functionality. Managed service environments limit your ability to tune underlying algorithms. Outsourced talent will lack your team’s domain expertise. With an in-house team you have the opportunity to create optimised solutions, potentially beyond the current state of the art.
  • Differentiation: An in-house team can develop a unique AI offering that delivers competitive advantage, credibility in the market and associated value for your company.
  • Resilience: Without reliance on third party APIs or outsourcers, your AI initiatives can enjoy greater resilience and longevity.
  • Security: Retain control over your own data; none of it needs to be passed to third parties.
Drawbacks of an in-house team include:
  • Cost: A small in-house team, comprising two to four people and the hardware they require, will cost at least £250,000 to £500,000 per year – potentially more to productise the resulting system. A large team, recruited to solve problems at the edge of research, will require a multi-million-pound annual investment in personnel and hardware.
  • Complexity: To develop an in-house AI team you must attract, structure, manage and retain AI talent; select the development languages, frameworks and techniques you will employ; undertake data gathering and cleansing; learn how to productise AI into real-world systems; and ensure compliance with regulatory and ethical standards.
  • Speed: It will require months to build a productive in-house AI team, and potentially longer to collect the data you require and develop customised solutions that deliver results to the standard you require.

An in-house team may be necessary if your challenge cannot be solved with existing AI techniques and solutions, if you face significant restrictions on your ability to pass data to third parties, or if you seek competitive differentiation through AI. Before incurring the cost and complexity of an AI team, however, explore whether alternative methods can deliver your requirements faster and for a lower cost (Fig. 20). A hybrid strategy, described below, may be ideal.

To develop an in-house AI team, review all chapters of this Playbook for best practices regarding strategy, talent, data, development, production and regulation & ethics.

Fig. 20. An in-house team offers differentiation – at a price
Use an in-house team if you:
  • Have a niche problem that cannot be solved with existing solutions or techniques
  • Seek differentiation in the market and associated value
  • Wish to retain control over your own data.
Avoid an in-house team if you:
  • Have a simple problem for which solutions are available
  • Require an initial solution quickly
  • Have a modest budget.

Source: MMC Ventures

A hybrid approach can offer the ‘best of both worlds’

For many companies, a ‘hybrid’ approach to AI is ideal. Plan for an in-house team that will address your requirements to a high standard over time, but use third party APIs (or even a non-AI solution) to solve an initial, simpler version of your challenge.

A hybrid approach may enable you to prove the viability or value of your idea and justify in-house spend. It can also serve as a cost-effective way to identify the aspects of your challenge that can be readily addressed and those that will require bespoke work.

A hybrid strategy offers a rapid, low cost start that suits many companies (Fig. 21). Initial investment in hardware, team and software can be minimal. Many APIs offer free trial periods in which you can assess scope for results. Even if your data restrictions prohibit use of third party APIs, you can adopt a hybrid approach with in-house developers using pre-trained AIs. Further, many academic papers and coding competition entries have code uploaded to GitHub and many have unrestricted licences.

If you adopt a hybrid approach, develop a data strategy (Chapter 1) and pipeline of training data upfront. You can continue to use third-party APIs if they fulfil your needs unless costs become prohibitive, you wish to build resilience, or you seek improved results and differentiation with your own technology. As you gather additional data, you can create more accurate and complex models in-house, as needed and when the business case has been proven.

While the risk of interruption to services from Amazon, Google, IBM and Microsoft is low, vendors do occasionally remove APIs. Smaller vendors offering APIs may be acquired, or their services changed or discontinued. If you adopt a hybrid approach, develop a strategy for resilience. Once elements of your product are in place, examine the pre-trained models and consider moving these services in-house if you can achieve results comparable with the API. You may be able to use your chosen APIs in perpetuity and continually develop niche AI to complement these – a popular approach.

“A hybrid approach gives me flexibility. I don’t need to reinvent the wheel and can focus on doing very specific tasks better than anyone else in the world.”

Dr Janet Bastiman, StoryStream

Fig. 21. A hybrid approach can offer the ‘best of both worlds’
Use a hybrid approach if you:
  • Require rapid initial results
  • Wish to limit spend until a business case is proven
  • Have an evolving problem and desire for greater differentiation and resilience over time.
Avoid a hybrid approach if you:
  • Have a generic problem solved with existing APIs
  • Have a complex problem, to which a simple solution will cause more harm than no solution
  • Have data permission challenges that prevent use of APIs.

Source: MMC Ventures

To develop AI, optimise your technology stack

To develop AI – via a managed service provider, outsourcer or in-house team – you have choices to make regarding your AI technology stack. The stack comprises six layers: hardware; operating systems; programming languages; libraries; frameworks and abstractions (Fig. 22).

We offer hardware recommendations overleaf. The problem domain you are tackling (assignment, grouping, generation or forecasting) will then favour particular machine learning techniques and associated libraries and frameworks. Select components for your development stack accordingly.

The degree of abstraction you select will depend upon the skill of your development team, the speed of development you require and the degree of control you seek over the models you develop. Greater abstraction offers faster development and requires less skill, but limits your ability to tune models to your requirements. The size and speed of your models may also be limited.

Not all problems require the full stack; some solutions can be achieved rapidly, without frameworks or abstractions.

Fig. 22. The six layers of the AI technology stack

Abstractions (e.g. Keras, Digits)
Frameworks (e.g. TensorFlow, PyTorch)
Libraries (e.g. NumPy, Pandas)
Languages (e.g. Python, R)
Operating System/CUDA (e.g. Linux, Windows)
Hardware (e.g. GPUs, CPUs)

Source: MMC Ventures

For effective R&D, use appropriate hardware

Research and development requires hardware. To process models quickly, ensure your team has hardware with graphical processing units (GPUs) that support NVIDIA’s Compute Unified Device Architecture (CUDA) libraries. These allow your AI programmes to use the specialised mathematics of the GPUs and run at least ten times faster than on a CPU. For many, a laptop with a high performance graphics card is ideal. Current, potentially suitable cards include the NVIDIA GTX 1050 Ti, 1070, 1080 and the RTX 2080.

For greater power, desktop machines with more powerful GPUs are preferable – but at the expense of flexibility for your team. If you are a multi-premise team, or expect your personnel to travel, your team may expect a laptop in addition to a desktop machine you provide for research.

For large models, or rapid training, you will require dedicated servers. Buying and hosting servers yourself, either on-premise or in a data centre, is the most cost-effective option over the long term but requires considerable upfront capital expenditure. The majority of early stage companies will find it more appropriate to use cloud-based servers offered by large providers including Google, Amazon and Microsoft. All offer GPU servers, costed according to usage time. Using cloud providers, at least in the early stages of your AI initiatives, will enable you to control costs more effectively and defer hardware purchases until you have a minimum viable product.

Apply AI techniques suited to the problem domain

For each problem domain (assignment, grouping, generation and forecasting – see Chapter 1, ‘Strategy’) – there are numerous established machine learning techniques.

Techniques vary in their data requirements, training dynamics, deployment characteristics, advantages and limitations. While deep learning methods are powerful, other techniques may be sufficient or better suited. Experiment with multiple techniques. Below, we highlight techniques popular for each domain.

For assignment problems consider SVCs, Bayes, KNNs and CNNs

Classification problems, which offer a defined, correct output to ease development, are frequently an attractive starting point for AI. While convolutional neural networks became popular in part due to their efficacy in solving classification problems, there are many alternative techniques you can apply – many of which offer effective results and are quicker to implement.

Fig. 23. For assignment problems consider SVCs, Bayes, KNNs and CNNs
Support Vector Classification (SVC)
  • Approach: SVC is effective when classifying images or text and you have fewer than 10,000 examples. Plot data in multi-dimensional space, based upon the number of variables in each example, and the SVC algorithm will determine the boundaries of each class (Fig. 24). New examples are classified based upon their relationship to the calculated boundaries.
  • Advantages: Effective when there are many variables.
  • Challenges: Prone to overfitting. Cannot directly provide probability estimates to evaluate results.

Naïve Bayes
  • Approach: Naïve Bayes assumes that variables are independent and is particularly effective for text classification. Classifications are developed based upon the probability of each variable being contained within a specific class. Probabilities are then combined to provide an overall prediction.
  • Advantages: Fast to train and run. Effective for text and variables.
  • Challenges: Highly sensitive to training data. Probability for classifications is unreliable.

K-Nearest Neighbours Classification (KNN)
  • Approach: KNN is a productive statistical technique when you possess a complete data set. All training data is mapped into vectors, from an origin, based on the variables in the data. Each point in space is assigned a label. New data is then classified by mapping it to the same space and returning the label of the closest existing datapoints (Fig. 26).
  • Advantages: Effective when boundaries between classes are poorly defined.
  • Challenges: All data must be stored in memory for classification; predictions require additional resources and time.

Convolutional Neural Networks (CNNs)
  • Approach: CNNs comprise multiple layers of neurons. Data passing through the network is transformed, by examining overlaps between neighbouring regions, to create areas of interest. The final layer of the network is then mapped to target classes.
  • Advantages: Excel with complex data and multiple output classes.
  • Challenges: Computationally expensive. Slow to train.

Source: MMC Ventures
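To make the KNN description above concrete, here is a minimal sketch in plain NumPy – illustrative only; a library implementation would normally be used. A new point is classified by majority vote among its k nearest training points:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    """Classify x_new by majority vote among its k nearest training points."""
    # Euclidean distance from x_new to every training vector.
    distances = np.linalg.norm(X_train - x_new, axis=1)
    # Labels of the k closest points.
    nearest_labels = [y_train[i] for i in np.argsort(distances)[:k]]
    # Majority vote.
    return Counter(nearest_labels).most_common(1)[0][0]

# Two toy clusters: class 0 near the origin, class 1 near (5, 5).
X = np.array([[0, 0], [1, 0], [0, 1], [5, 5], [6, 5], [5, 6]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([0.5, 0.5])))  # → 0
print(knn_predict(X, y, np.array([5.5, 5.5])))  # → 1
```

Note the challenge listed above in action: the full training set (X, y) must be held in memory for every prediction.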

Fig. 24. SVCs maximise the boundaries between classes

Source: Haydar Ali Ismail

Fig. 25. Naïve Bayes classifies based on the probability of a variable being contained in a class

Source: Rajeev D. S. Raizada, Yune-Sang Lee (pone.0069566)

Fig. 26. KNNs return the label of the closest datapoint

Source: Savan Patel

“While convolutional neural networks are popular, there are many alternative techniques you can apply – many of which are quicker to implement.”
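The text-classification strength attributed to Naïve Bayes can also be made concrete. Below is a minimal multinomial Naïve Bayes sketch in plain NumPy – illustrative only, using a toy two-class sentiment task. Per-class word probabilities (with Laplace smoothing) are combined as log-probabilities into an overall prediction:

```python
import numpy as np

def train_nb(docs, labels, alpha=1.0):
    """Fit multinomial Naive Bayes: per-class priors and word probabilities.

    docs: list of token lists; labels: parallel list of class names.
    alpha: Laplace smoothing, so unseen words keep non-zero probability.
    """
    vocab = sorted({w for d in docs for w in d})
    classes = sorted(set(labels))
    word_idx = {w: i for i, w in enumerate(vocab)}
    counts = np.zeros((len(classes), len(vocab)))
    priors = np.zeros(len(classes))
    for d, lab in zip(docs, labels):
        c = classes.index(lab)
        priors[c] += 1
        for w in d:
            counts[c, word_idx[w]] += 1
    priors /= priors.sum()
    word_probs = (counts + alpha) / (counts + alpha).sum(axis=1, keepdims=True)
    return classes, word_idx, np.log(priors), np.log(word_probs)

def predict_nb(model, doc):
    classes, word_idx, log_priors, log_word_probs = model
    # Sum log-probabilities (numerically stable product of probabilities).
    scores = log_priors.copy()
    for w in doc:
        if w in word_idx:  # ignore words never seen in training
            scores += log_word_probs[:, word_idx[w]]
    return classes[int(np.argmax(scores))]

docs = [["great", "film"], ["awful", "film"],
        ["great", "acting"], ["awful", "plot"]]
labels = ["pos", "neg", "pos", "neg"]
model = train_nb(docs, labels)
print(predict_nb(model, ["great", "plot"]))  # → pos
```

As the table above notes, this is fast to train and run – a single counting pass over the data – but the resulting class probabilities should not be treated as reliable confidence estimates.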

Regression problems quantify the extent to which a feature exists. Because they are also assignment problems, the techniques used for assignment frequently overlap with those used for regression.

Fig. 27. For regression problems, explore SVRs, Lasso and CNNs
Support Vector Regression (SVR)
  • Approach: SVR is similar to SVC; training data is plotted in multi-dimensional space. However, unlike SVC (where hyperplanes are generated to maximise distance from the data), with SVR hyperplanes are matched as closely as possible to the data.
  • Advantages: Effective with large numbers of variables. Can extrapolate for new data.
  • Challenges: Prone to overfitting. The prediction is provided without confidence in its correctness; confidence must be determined through indirect methods.

Least Absolute Shrinkage and Selection Operator (Lasso)
  • Approach: Lasso minimises the number of variables used to make a prediction. If there are multiple, correlated variables, Lasso will select one at random.
  • Advantages: Fast predictions. Well suited to situations in which few variables are important for a prediction.
  • Challenges: Minimising input variables may cause overfitting to training data. Selected variables may oversimplify the problem.

Convolutional Neural Networks (CNNs)
  • Approach: CNNs can also be used for regression assignment tasks. Unlike when used for classification, the CNN provides a single neuron, with the prediction value as an output.
  • Advantages: Effective for complex problems.
  • Challenges: Difficult to determine which inputs contribute to a prediction. Difficult to determine confidence in the prediction.

Source: MMC Ventures
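Lasso's variable selection comes from soft-thresholding: coefficients whose contribution falls below a penalty are shrunk exactly to zero. The following is a minimal coordinate-descent sketch in NumPy, on synthetic data; the penalty and iteration counts are illustrative assumptions, and a library implementation should be preferred in practice:

```python
import numpy as np

def lasso_coordinate_descent(X, y, alpha=0.1, n_iter=200):
    """Minimise ||y - Xw||^2 / (2n) + alpha * ||w||_1 by cyclic coordinate descent."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            residual = y - X @ w + X[:, j] * w[j]      # residual excluding feature j
            rho = X[:, j] @ residual / n
            z = (X[:, j] @ X[:, j]) / n
            # Soft-thresholding: small contributions are shrunk exactly to zero
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=200)    # only feature 0 matters
w = lasso_coordinate_descent(X, y)
print(np.round(w, 2))  # feature 0 dominates; irrelevant features shrink towards zero
```

The shrinkage of the two irrelevant coefficients to (near) zero illustrates why Lasso suits problems where few variables drive the prediction.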

For grouping explore Meanshift Clustering, K-Means and GMMs

If you have unlabelled data and seek to cluster it into similar groups, you will require techniques that expose similarity. Defining similarity can be challenging when the data has many dimensions.

Fig. 28. For grouping explore Meanshift Clustering, K-Means and GMMs
Meanshift Clustering
Approach: Meanshift clustering discovers groups within a data set by selecting candidates for the centre of a group from the arithmetic mean of the datapoints in the region. The process continues until there is a distinct set of groups, each with a centre marker (Fig. 29).
Advantages: You do not need to know in advance how many clusters you expect.
Challenges: The algorithm's scalability is limited by the number of calculations between neighbours in each iteration.

K-Means (Lloyd's algorithm)
Approach: K-Means groups data into a pre-defined number of clusters of equal variance (data spread within the group).
Advantages: Scalable to large data sets.
Challenges: Defining the number of clusters in advance can be difficult because it requires some knowledge of the probable answers. If data is irregularly shaped when plotted in multi-dimensional space, the algorithm can suggest peculiar distributions.

Gaussian Mixture Models (GMMs)
Approach: GMMs can offer more flexibility than K-Means. Instead of assuming that points are clustered around the mean of each group, GMMs assume a Gaussian distribution and can offer ellipse shapes (Fig. 30).
Advantages: Because they draw upon probabilities, GMMs can label datapoints as belonging to multiple classes – which may be valuable for edge cases.
Challenges: If the Gaussian distribution assumption is invalid, the clustering may perform poorly with real data.

Source: MMC Ventures
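Lloyd's algorithm for K-Means is compact enough to sketch in NumPy. The version below is illustrative only: it assumes well-separated data and does not handle the empty-cluster edge case a library implementation would.

```python
import numpy as np

def k_means(X, k, n_iter=50, seed=0):
    """Lloyd's algorithm: alternately assign points to the nearest centre,
    then move each centre to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]  # initialise from the data
    for _ in range(n_iter):
        distances = np.linalg.norm(X[:, None] - centres[None], axis=2)
        labels = np.argmin(distances, axis=1)               # nearest centre per point
        centres = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centres

# Two tight, well-separated toy blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, size=(20, 2)),
               rng.normal(5, 0.1, size=(20, 2))])
labels, centres = k_means(X, k=2)
print(labels)
```

As the table notes, k must be chosen in advance; with irregularly shaped data the means can settle in peculiar positions.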

Fig. 29. Meanshift Clustering produces distinct groups with centre markers

Source: Miroslav Radojević

Fig. 30. GMMs offer elliptical groupings instead of assuming points are clustered round a mean

Source: John McGonagle, Geoff Pilling, Vincent Tembo

For generation, VAEs and GANs can be effective

Since its inception, AI has been used to synthesise text; MIT’s ELIZA natural language processing programme, created from 1964 to 1966, offered the illusion of understanding in psychology and other domains. In the decades since, the quality of generation techniques has been transformed – particularly following the introduction of Generative Adversarial Networks (GANs) – while domains of application have broadened to include visual imagery and sound.

Fig. 31. For generation, VAEs and GANs can be effective
Pattern matching
Approach: Pattern matching is among the most naïve of techniques but offers the illusion of intelligence in text generation. Using a dictionary of phrases and key words to recognise input statements, it is possible to create moderately effective responses with little effort.
Advantages: Useful for repetitive situations that may be fully mapped – such as sports reporting or basic customer support.
Challenges: Rapidly becomes nonsensical when inputs are outside a predefined area.

Probabilistic prediction
Approach: Probabilistic prediction can be effective for text generation. Given a word or words from a sentence, probabilistic models determine a word or phrase to follow and recommend the text with the highest probability.
Advantages: Improves quickly with use.
Challenges: Addresses a set of problems limited in scope.

Variational Auto-Encoders (VAEs)
Approach: VAEs train from real-world data. VAEs use a convolutional neural network to encode data into a vector, and a second network to deconvolve the vector back to the original image (Fig. 32). After training the network, varying the input vector will provide realistic outputs.
Advantages: Outputs can be compared directly to the original.
Challenges: The likelihood of a realistic output decreases if the difference between the original data vector and the new input vector becomes too great. Image outputs can be blurry.

Generative Adversarial Networks (GANs)
Approach: GANs comprise a generator network, such as DCGAN (Deep Convolutional GAN), and a discriminator network (a standard classification CNN) (Fig. 33). The generator attempts to create an output that will fool the discriminator, while the discriminator becomes increasingly sophisticated at identifying outputs that are unreal. With sufficient training, the generator network learns to create images or text that are indistinguishable from real examples.
Advantages: Create realistic outputs from random input noise.
Challenges: Cannot generate outputs with specific features unless the GAN searches the entire input space; random inputs give random (although realistic) outputs, so you cannot force a specific output condition. The discriminator identifies only real images and fakes, not whether the output includes elements of interest. The more complex the image or text being created, the harder it is to create realistic output. Current research centres on splitting the challenge into multiple generative steps.

Source: MMC Ventures
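Probabilistic prediction of the kind described above can be sketched with a simple bigram model in plain Python. The corpus and code below are a toy illustration, far simpler than production text-generation systems:

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=8, seed=42):
    """From a start word, repeatedly sample a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # no observed continuation
        out.append(rng.choice(followers))  # frequent continuations are sampled more often
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = build_bigram_model(corpus)
print(generate(model, "the"))
```

Every generated bigram has been seen in the corpus, which illustrates the table's caveat: the approach improves with use but addresses only a limited scope of problems.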

Fig. 32. VAEs encode images into a vector and add noise before regenerating

Source: Kevin Frans

Fig. 33. With one network, GANs generate output from random noise; a second network serves as a discriminator

Source: Thalles Silva

For forecasting, causal models and HMMs are popular

Applying techniques to predict the future is challenging; forecasts may be affected by variables outside the data available to you. While the past is not always indicative of the future, AI forecasting techniques are effective when there are causal or periodic effects. Understanding the volume of data you require may itself demand initial knowledge of those causal and periodic effects; without it, your model may miss these relationships.

“While the past is not always indicative of the future, AI forecasting techniques are effective when there are causal or periodic effects.”

Fig. 34. For forecasting problems, experiment with causal models, HMMs and ARMA
Causal models
Approach: A sub-class of assignment problem, causal models can use the same techniques – with the additional consideration of variables' rate of change – to predict new values.
Advantages: Straightforward to implement.
Challenges: Consider a point in time; may fail to take into account longer-term trends.

Hidden Markov Models (HMMs)
Approach: Markov models provide a sequence of events based upon the previous time step. HMMs assume that predictions of the future can be based solely upon the present state; further history is irrelevant.
Advantages: Well suited to learning and predicting sequences within data based upon probability distributions.
Challenges: Challenging to train. Rapidly become inaccurate if sequences change.

Auto-Regression Moving Average (ARMA)
Approach: Despite dating from the 1950s, ARMA remains useful. ARMA considers past values and uses regression to model and predict a new value, while a moving average calculates the error. A further algorithm determines the best fit for future predictions.
Advantages: Considers past values and prediction error, offering greater adaptation than HMMs.
Challenges: Can oversimplify problems that have complex periodicity, or randomness, in the time series.

Source: MMC Ventures
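A full ARMA implementation is best left to a library, but the auto-regressive component can be sketched directly in NumPy. The example below fits an AR(1) coefficient by least squares on synthetic data; it is a simplified illustration of the regression-on-past-values idea, not the complete ARMA procedure:

```python
import numpy as np

def fit_ar1(series):
    """Estimate phi in x_t ≈ phi * x_{t-1} by least squares."""
    x_prev, x_next = series[:-1], series[1:]
    return (x_prev @ x_next) / (x_prev @ x_prev)

def forecast(last_value, phi, steps=3):
    """Roll the model forward, feeding each prediction back in."""
    preds = []
    for _ in range(steps):
        last_value = phi * last_value
        preds.append(last_value)
    return preds

# Synthetic AR(1) series with true coefficient 0.8
rng = np.random.default_rng(1)
x = [1.0]
for _ in range(500):
    x.append(0.8 * x[-1] + rng.normal(scale=0.1))
x = np.array(x)

phi = fit_ar1(x)
print(round(float(phi), 2))  # close to the true coefficient 0.8
```

ARMA adds a moving-average term over the prediction errors on top of this auto-regressive core, which is what gives it greater adaptability than a pure Markov approach.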

Use frameworks to accelerate development

If your team is developing an AI solution, use libraries to accelerate development. The algorithms described above have been coded into efficient libraries for Python and R. Implementing an algorithm directly in Python will be slower – in some cases 60 times slower (Fig. 35).

Fig. 35. Libraries offer improved performance
Implementation: Run time
Pure Python (with list comprehensions): 18.65 seconds
TensorFlow on CPU: 1.20 seconds
NumPy: 0.32 seconds
Run time for a linear regression problem implemented in pure Python, using TensorFlow (on CPU for comparability) and using in-built functions in NumPy (a Python library for numerical analysis).

Source: Renato Candido
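The gap shown in Fig. 35 is easy to reproduce. The sketch below times the same dot product as a pure-Python loop and as a single NumPy call; the absolute numbers will differ by machine, but the vectorised version should be orders of magnitude faster:

```python
import time
import numpy as np

n = 1_000_000
a = np.arange(n, dtype=np.float64)
b = np.arange(n, dtype=np.float64)

# Pure-Python loop: one interpreted iteration per element
start = time.perf_counter()
total_loop = 0.0
for i in range(n):
    total_loop += a[i] * b[i]
loop_time = time.perf_counter() - start

# NumPy: the whole product is dispatched to optimised C code
start = time.perf_counter()
total_np = float(a @ b)
numpy_time = time.perf_counter() - start

print(f"loop: {loop_time:.2f}s, numpy: {numpy_time:.4f}s")
```

Both computations return the same result; only the dispatch mechanism differs, which is why libraries deliver such large speed-ups.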

For numerical analysis, NumPy is a library of choice

There are many libraries available to support numerical analysis. The most popular include:

  • NumPy: A library of choice for numerical analysis in Python. Functions are optimised in C so run quickly, matrices and vectors are well handled, and there are many in-built statistical algorithms to support AI.
  • Scikit-learn: Released in 2010 as a lightweight library for machine learning, Scikit-learn is built on NumPy and offers considerable overlap, although the two complement each other well.
  • Matplotlib: Predominantly a plotting library, Matplotlib includes its own numerical calculations to support visual analysis of plots. These are limited, and further libraries are required for broader analytical techniques.
  • R packages are not as extensive as those for Python but there are many for numerical analysis on top of core R functions – including caret, glmnet, randomForest and nlme.
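To illustrate the in-built statistical functions mentioned above, a brief NumPy sketch on toy data:

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

print(data.mean())      # 5.0
print(data.std())       # 2.0 (population standard deviation)
print(np.median(data))  # 4.5
# A perfect linear relationship yields a correlation coefficient of 1.0
print(np.corrcoef(data, data * 2 + 1)[0, 1])
```

Because these functions are optimised in C, they remain fast even on arrays with millions of elements.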

In addition to libraries there are specific applications, such as Matlab and Mathematica, which offer extensive functions. While popular in academic settings, they are rarely used in industry given the high cost of software licenses compared with the free libraries available.

For deep learning, TensorFlow and Caffe are popular frameworks

Deep learning frameworks are typically more extensive than numerical analysis libraries and serve as ‘scaffolding’ for your projects. Your choice of framework will impact speed of development as well as the features and scalability of your solution.

With numerous frameworks available, take time to evaluate your project priorities and the framework best suited to your goals. The most popular framework may not be optimal for your initiative. When selecting a framework consider its advantages and limitations, the skills it requires, availability of skills, and scaling and speed requirements (both for development and production).

Unless you are using a pre-trained network, if you have implemented models in a single framework then reimplementing them in another will involve retraining from scratch. You may elect to use multiple frameworks for different problems, particularly if doing so allows consistency with existing development languages.

Frameworks evolve at different speeds and, particularly when maintained by a single business or university, may be discontinued with limited notice. In a rapidly evolving field, frameworks with high levels of community support can be attractive.

Fig. 36. Different deep learning frameworks offer advantages and challenges
TensorFlow
Features: One of the most widely used frameworks, TensorFlow is implemented as a Python library, enabling rapid development of a wide variety of projects. There are many example projects for TensorFlow, and numerous code samples (available with an open source license) for different classes of problem that can be adapted rapidly for your own tasks.
Maintained by: Google. Community support: High. Availability of talent: High.
Advantages: Numerous example projects are available. Becoming a standard, as many training courses use TensorFlow. Allows lower-level data manipulation for tuning.
Challenges: Significant computational overhead. Less efficient than numerical libraries for certain calculations. Challenging to optimise.

Caffe/Caffe2
Features: Caffe is one of the earlier frameworks, implemented in C++ with a Python interface. Originally designed for convolutional neural networks, Caffe grew to support feed-forward networks. Facebook recently introduced Caffe2, which is built for mobile, includes pre-trained models, and is likely to be merged with PyTorch (also from Facebook).
Maintained by: Berkeley Vision (Caffe); Facebook (Caffe2). Community support: Medium. Availability of talent: Medium.
Advantages: Widely used in the academic community.
Challenges: Challenging to compile. Limited support.

Theano
Features: Among the oldest deep learning libraries, Theano is more mathematical than many and positioned between analytical libraries, such as NumPy, and abstract frameworks, such as TensorFlow. Much academic research was undertaken in Theano, with Python, and many early adopters of AI employed it.
Maintained by: University of Montreal. Community support: Low. Availability of talent: Medium.
Advantages: Efficient and scalable. Straightforward to implement new algorithms. Used for many early machine learning courses.
Challenges: No longer developed or supported. Developers with experience may advocate for its use.

MXNet
Features: MXNet supports a wide range of programming languages, including C++, R, Python and Javascript, and is maintained by the open source community.
Maintained by: Apache. Community support: Medium. Availability of talent: Low.
Advantages: Fast and scalable; designed for high performance.
Challenges: Little support in academia or industry, except niche, high-performance use cases.
Torch/PyTorch
Features: Torch provides numerous pre-trained models and development that is similar to traditional programming. While Torch supported only the Lua language, PyTorch supports Python.
Maintained by: Facebook. Community support: Low. Availability of talent: Low.
Advantages: Uses standard debugging techniques. Supports distributed training.
Challenges: PyTorch 1.0 was recently released (October 2018); change may be rapid. Limited available Lua talent for Torch.

DeepLearning4J
Features: DeepLearning4J is written for Java and Scala, and supports a wide variety of networks.
Maintained by: Eclipse Foundation. Community support: Low. Availability of talent: Low.
Advantages: Fast and scalable. Can operate with an existing Java stack.
Challenges: Lacking support for Python, its use is uncommon. Few examples.

Chainer
Features: Chainer is a smaller library for Python, used extensively for natural language tasks (speech recognition, sentiment analysis and translation).
Maintained by: Preferred Networks. Community support: Low. Availability of talent: Low.
Advantages: Networks can be modified while running.
Challenges: Challenging to debug. Few examples.

Digits
Features: NVIDIA's framework is freely available to participants in the company's developer programme or users of the NVIDIA cloud. Digits abstracts much of the programming into a visual interface, which allows researchers to focus on network design instead of coding data import routines or network architecture components. Digits will operate with TensorFlow or on a standalone basis.
Maintained by: NVIDIA. Community support: Low. Availability of talent: Low.
Advantages: Enables rapid prototyping. Highly optimised.
Challenges: Low levels of support in academia and industry. Few available examples. Restrictive abstraction.

Keras
Features: Keras is a Python library that allows rapid prototyping of neural networks. Not a framework in its own right, we consider it here as an extension to Theano and TensorFlow.
Maintained by: François Chollet. Community support: Medium. Availability of talent: Medium.
Advantages: Enables rapid prototyping and experimentation. Accessible for beginners and useful for all levels of experience.
Challenges: Requires additional frameworks. Challenging to customise networks beyond the abstraction layer; re-working may be required to utilise underlying frameworks.

Source: MMC Ventures

“APIs can cost as little as tens or hundreds of pounds to use – making AI accessible to companies of all sizes and organisations that seek proof of value before committing to greater budgets.”