What's the difference between data science, machine learning, and artificial intelligence?

Data science produces insights
Data science is distinguished from the other two fields because its goal is an especially human one: to gain insight and understanding.
This definition of data science thus emphasizes:
Statistical inference
Data visualization
Experiment design
Domain knowledge
Communication
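
To make the list above concrete, here is a small illustrative sketch (all column names and numbers are hypothetical) of the insight-oriented workflow it describes: summarize an experiment, visualize the result, and communicate the finding.

```python
# Hypothetical experiment data: which signup variant converts better?
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "variant":   ["A", "A", "A", "B", "B", "B"],
    "converted": [1, 0, 1, 1, 1, 0],
})

# Statistical summary: conversion rate per variant.
summary = df.groupby("variant")["converted"].mean()
print(summary)

# Visualization to communicate the finding.
summary.plot(kind="bar")
plt.ylabel("conversion rate")
plt.title("A/B test results")
plt.show()
```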
Machine learning produces predictions
I think of machine learning as the field of prediction: of “Given instance X with particular features, predict Y about it”. These predictions could be about the future (“predict whether this patient will go into sepsis”), but they also could be about qualities that aren’t immediately obvious to a computer (“predict whether this image has a bird in it”). Almost all Kaggle competitions qualify as machine learning problems: they offer some training data, and then see if competitors can make accurate predictions about new examples.
Artificial intelligence produces actions
Artificial intelligence is by far the oldest and the most widely recognized of these three designations, and as a result it’s the most challenging to define. The term is surrounded by a great deal of hype, thanks to researchers, journalists, and startups who are looking for money or attention.
  Deepa Rathi on 12:20 AM 03 Jan 2021

MLOps platforms are forecast to generate annual revenues in excess of $4 billion

Deloitte Consulting published a report today that suggests a golden age of AI is in the offing, assuming organizations can implement and maintain a consistent approach to machine learning operations (MLOps).
Citing market research conducted by AI-focused Cognilytica, the MLOps: Industrialized AI report from Deloitte notes that the market for MLOps platforms is forecast to generate annual revenues in excess of $4 billion by 2025.
Several startups are already focused on providing these platforms. Less clear, however, is the degree to which MLOps might become an extension of the DevOps platforms many organizations rely on today to build and deploy software.
At the crux of that debate is the way organizations currently build and deploy AI models. The average data science team is lucky if they can build and deploy two AI models a year. In the wake of the COVID-19 pandemic, however, organizations have accelerated their investments in AI as part of an effort to drive digital business transformations, Deloitte AI Institute executive director Beena Ammanath said. “This space is going to heat up in the next 18 months,” Ammanath said.
  Deepa Rathi on 08:21 AM 19 Dec 2020

Why Do Organizations Need MLOps?

Many organizations are working on machine learning and artificial intelligence (AI). Some are already seeing the benefits of artificial intelligence through increased productivity and revenue. However, for most organizations embarking on this transformational journey, the results are yet to be seen, and for those already underway, scaling those results remains completely uncharted waters.
According to a survey, only 20% of leading enterprises have deployed AI capabilities into production at any scale. Most of these leading organizations have significant AI investments, but their path to tangible business benefits is challenging, to say the least. There are a number of reasons for this that we find to be recurring practically everywhere.
There’s a skill, motivation, and incentive gap between teams developing machine learning models (data scientists) and operators of those models (DevOps, software developers, IT, etc.). There are a plethora of issues here, which vary by organization and business unit. Here are a few examples:
Lack of available data science talent means that when organizations find someone with the right experience, they allow these individuals to operate in an environment that’s most suitable for them, which leads to the next problem.
Models are typically created using data-science-friendly languages and platforms, which are often suboptimal for, or simply unfamiliar to, Ops teams, whose services were designed around conventional software languages and platforms.
Ops teams are geared toward optimizing runtime environments across their cloud, resource managers, role-based services, and so on. Data science teams are typically oblivious to the considerations these dependencies impose, so the models they create do not take them into account at all.
The lack of a proper, native governance structure for machine learning models, including system, lifecycle, and user logs, stifles troubleshooting as well as legal and regulatory reporting.
Organizations that don't properly monitor their models introduce what can become immense risk: production models that no longer reflect the ever-changing patterns in data, user and consumer behavior, and a host of other factors that affect model accuracy, and that go uncorrected when they change.
  Deepa Rathi on 05:43 PM 17 Nov 2020

The danger of omission in data preparation

The omission of data is quite common, and it doesn’t just occur when you remove a category.
Suppose you’re trying to decide who is qualified for a loan. Even the best models will have a certain margin of error because you’re not looking at all of the people that didn’t end up getting a loan. Some people who wanted loans may have never come into the bank in the first place, or maybe they walked in and didn’t make it to your desk; they were scared away based on the environment or got nervous that they would not be successful.
As such, your model may not contain the comprehensive set of data points it needs to make a decision.
Similarly, companies that rely very heavily on machine learning models often fail to realize that they are using data from way too many “good” customers and that they simply don’t have enough data points to recognize the “bad” ones. This imbalance can badly skew what the model learns.
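
A toy simulation (all numbers are synthetic and illustrative) of that selection problem: the records you can train on come only from the people who applied and reached your desk, not from the full population the lending decision actually affects.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
income = rng.normal(50, 15, n)                                  # hypothetical applicant feature
default = rng.random(n) < 1 / (1 + np.exp((income - 35) / 8))   # lower income -> higher default risk

# Selection step: lower-income people are less likely to ever apply.
applied = rng.random(n) < 1 / (1 + np.exp(-(income - 50) / 8))

print(f"default rate, full population: {default.mean():.1%}")
print(f"default rate, training sample: {default[applied].mean():.1%}")
# The sample you can actually train on over-represents "good" customers,
# so the model never sees enough of the "bad" cases it must recognize.
```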
You can see this kind of selection bias at work in academia, life sciences in particular. The “publish or perish” mantra has long ruled. Even so, how many journal articles do you remember seeing that document failed studies? No one puts forth papers that say, “I tried this, and it really didn’t work.” Not only does it take an incredible amount of time to prepare a study for publication, but the author also gains nothing from pushing out the results of a failed study. If I did that, my university might look at my work and say, “Michael, 90% of your papers have had poor results. What are you doing?” That is why you only see positive or promising results in journals. At a time when we’re trying to learn as much as we can about COVID-19 treatments and potential vaccines, the data from failures is really important, but we are not likely to learn much about them because of how the system works, because of what data was selected for sharing.
  Deepa Rathi on 11:57 PM 15 Nov 2020

AWS announces Contact Center Intelligence solutions

AWS Contact Center Intelligence (CCI) solutions enable customers using contact center solutions to leverage off-the-shelf, machine-learning-powered functionality such as text-to-speech, translation, enterprise search, chatbots, business intelligence, and language comprehension. AWS CCI solutions allow customers to gain greater efficiencies and deliver increasingly tailored customer experiences within their existing contact center platform—with no machine learning expertise required.
AWS CCI solutions are focused on three stages of the contact center workflow: Self-Service, Live Call Analytics & Agent Assist, and Post-Call Analytics, and are available through participating APN partners. AWS CCI solutions leverage AWS AI Services, such as Amazon Lex, Amazon Kendra, Amazon Transcribe, Amazon Translate, and Amazon Comprehend.
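
This is not the CCI partner integrations themselves, just a hedged sketch of calling two of the underlying AWS AI services directly with boto3, for example on a post-call transcript (AWS credentials and region configuration are assumed; the transcript text is made up).

```python
import boto3

transcript = "I waited twenty minutes, but the agent resolved my billing issue."

comprehend = boto3.client("comprehend")
translate = boto3.client("translate")

# Language comprehension: overall sentiment of the call.
sentiment = comprehend.detect_sentiment(Text=transcript, LanguageCode="en")
print("Sentiment:", sentiment["Sentiment"])

# Translation: render the transcript for a reviewer in another language.
translated = translate.translate_text(
    Text=transcript, SourceLanguageCode="en", TargetLanguageCode="es"
)
print("Translated:", translated["TranslatedText"])
```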
  Admin Aditya Suman on 12:22 AM 20 Aug 2020

Google-affiliated researchers released the Language Interpretability Tool (LIT)

The Language Interpretability Tool (LIT) is a visual, interactive model-understanding tool for NLP models.
LIT is built to answer questions such as:
What kind of examples does my model perform poorly on?
Why did my model make this prediction? Can this prediction be attributed to adversarial behavior, or to undesirable priors in the training set?
Does my model behave consistently if I change things like textual style, verb tense, or pronoun gender?
LIT supports a variety of debugging workflows through a browser-based UI. Features include:
Local explanations via salience maps, attention, and rich visualization of model predictions.
Aggregate analysis including custom metrics, slicing and binning, and visualization of embedding spaces.
Counterfactual generation via manual edits or generator plug-ins to dynamically create and evaluate new examples.
Side-by-side mode to compare two or more models, or one model on a pair of examples.
Highly extensible to new model types, including classification, regression, span labeling, seq2seq, and language modeling. Supports multi-head models and multiple input features out of the box.
Framework-agnostic and compatible with TensorFlow, PyTorch, and more.
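
As a rough launch sketch following the pattern in the project's public quick-start documentation: MyModel and MyDataset below are hypothetical wrappers that would implement LIT's model and dataset interfaces around your own NLP model and data, so this is an outline rather than a runnable demo.

```python
from lit_nlp import dev_server
from lit_nlp import server_flags

models = {"my_model": MyModel()}     # hypothetical lit_nlp.api.model.Model subclass
datasets = {"my_data": MyDataset()}  # hypothetical lit_nlp.api.dataset.Dataset subclass

# Launch the browser-based UI described above.
lit_demo = dev_server.Server(models, datasets, **server_flags.get_flags())
lit_demo.serve()
```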
  Admin Aditya Suman on 12:57 AM 17 Aug 2020
