Path To Next Generation Search

Google announced a breakthrough in its effort to create an AI architecture that can handle millions of different tasks, including complex learning and reasoning. The new system is called the Pathways Language Model, abbreviated PaLM.

PaLM is a state-of-the-art AI that can even beat humans on some language and reasoning tests.

But the researchers also point out that they cannot escape the limitations inherent in large-scale language models, which can inadvertently lead to harmful ethical outcomes.

Background Information

The next few sections are background information that explains what this algorithm is about.

Few-Shot Learning

Few-shot learning is the next stage of machine learning, moving beyond the limitations of deep learning.

Google Brain researcher Hugo Larochelle (@hugo_larochelle) explained in a presentation titled Generalizing From Few Examples With Meta-Learning (video) that with deep learning, the problem is that researchers have to collect huge amounts of data, which requires a significant amount of human labor.

He pointed out that deep learning is probably not the road leading to AI that can solve many tasks, because with deep learning, each task requires millions of examples for every ability the AI learns.

LaRochelle explains:

“…the idea that we’ll try to attack this problem very directly is the few-shot learning problem, which is the problem of generalization from small amounts of data.

…the main idea in what I’ll present is that instead of trying to define that learning algorithm by hand and using our intuition as to what is the right algorithm for few-shot learning, we will actually try to learn that algorithm end to end.

And that’s why we call it learning to learn, or as I like to call it, meta-learning.”

The goal of the few-shot approach is to approximate the way humans learn different things and can combine different bits of knowledge together to solve new problems never encountered before.

The benefit is a machine that can leverage all the knowledge it has to solve new problems.

In the case of PaLM, one example of this capability is its ability to explain a joke it has never encountered before.
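To make the idea concrete, a few-shot prompt is just a handful of solved examples followed by an unanswered query; the model infers the task from the pattern. Below is a minimal sketch of how such a prompt is assembled (the helper function and example text are illustrative, not from Google's paper, and no real model is called):

```python
# Few-shot prompting sketch: the model is shown a handful of solved
# examples followed by a new query, and infers the task pattern.
# This only builds the prompt text; it does not call a model.

examples = [
    ("Translate to French: cat", "chat"),
    ("Translate to French: dog", "chien"),
]

def build_few_shot_prompt(examples, query):
    """Concatenate solved Q/A examples, then the unanswered query."""
    lines = []
    for question, answer in examples:
        lines.append(f"Q: {question}\nA: {answer}")
    lines.append(f"Q: {query}\nA:")  # trailing "A:" invites the model to answer
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(examples, "Translate to French: bird")
print(prompt)
```

With only two examples the model is expected to generalize to the third query, which is exactly the "generalization from small amounts of data" Larochelle describes.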

Pathways AI

In October 2021 Google published an article setting out the goals for a new AI architecture called Pathways.

Pathways represented a new chapter in the ongoing progress of developing AI systems.

Until then, the general approach had been to create algorithms that were trained to do one specific thing very well.

Pathways’ approach is to build a single AI model that can solve all problems by learning how to solve them, thus avoiding the less efficient way of training thousands of algorithms to complete thousands of different tasks.

According to the Pathways announcement:

“Instead, we’d like to train one model that can not only handle many separate tasks, but also draw upon and combine its existing skills to learn new tasks faster and more effectively.

That way, what a model learns by training on one task – say, learning how aerial images can predict the elevation of a landscape – could help it learn another task – say, predicting how flood waters will flow through that terrain.”

Pathways defined Google’s path for taking AI to the next level by bridging the gap between machine learning and human learning.

Google’s latest model, called the Pathways Language Model (PaLM), is the next step, and according to the new research paper, PaLM represents a significant advance in the field of AI.

What makes Google PaLM remarkable

PaLM scales up the few-shot learning process.

According to the research paper:

“Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application.

To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model (PaLM).”

Many published research papers describe algorithms that fail to outperform the current state of the art or that achieve only incremental improvements.

Not so with PaLM. The researchers claim significant improvements over existing best models and even outperform human benchmarks.

This level of success is what makes this new algorithm remarkable.

The researchers write:

“We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks.

On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned state-of-the-art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark.

A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model.”

PaLM improves the state of the art in English natural language processing tasks, and that is what makes it important and remarkable.

On a collaborative benchmark called BIG-bench, consisting of more than 150 tasks (related to reasoning, translation, and question answering), PaLM outperformed the state of the art, though there were areas where it fell short.

Worth noting is that humans outperformed PaLM on 35% of the tasks, especially mathematics-related tasks (see section 6.2, BIG-bench, of the research paper, page 17).

PaLM was better at translating other languages into English than at translating English into other languages. The researchers said this is a common problem that could be solved by prioritizing more multilingual data.

Still, PaLM outperformed other language models and humans across the board.

Reasoning Ability

Of particular note is PaLM’s performance on reasoning tasks involving arithmetic and commonsense reasoning.

Example of an arithmetic reasoning task:

Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?

The answer is 11.
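The reasoning step the model must carry out is simple arithmetic: 2 cans of 3 balls each is 6 balls, added to the 5 Roger already has. A quick check in Python:

```python
# Roger's tennis balls: he starts with 5 and buys 2 cans of 3 balls each.
starting_balls = 5
new_cans = 2
balls_per_can = 3

total = starting_balls + new_cans * balls_per_can
print(total)  # 11
```

The arithmetic is trivial for a program; what is notable is a language model arriving at it from the word problem alone.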

Example of commonsense reasoning:

Sean was in a rush to get home, but the light turned yellow and he was forced to do what?
Answer choices: (a) take time (b) dawdle (c) go slowly (d) ocean (e) slow down

The answer is (e) slow down.

Generating Explanations

Another striking example of how advanced PaLM is comes from tests that require it to generate explanations.

Here’s how the research paper explains it:

“The goal of this section is to showcase PaLM’s capabilities with regard to explanatory language generation, for prompts that require a complex combination of logical inference chaining, world knowledge, abstract language understanding, and lexicographic language understanding (e.g., puns).”

Explaining Jokes

Here’s an example of a language model explaining a joke:

I was going to fly to visit my family on April 6th. My mom said “Oh great, your step dad’s poetry reading is that night!”

So now I’m flying in on April 7th.

Model explanation:
The joke is that the speaker’s mother is trying to get them to go to their step dad’s poetry reading, but the speaker doesn’t want to go, so they are changing their flight to the day after the poetry reading.

Inference Chaining

Here is an example of inference chaining, where several concepts and entities must be correctly linked together and considered in order to arrive at an answer.

Guido was writing a Python program. He keeps getting confused by variable scope and looking it up on StackOverflow.

Is it possible that the Guido in the prompt refers to Guido van Rossum?

Model output:
Guido van Rossum is the creator of Python. He would not need to look up variable scope on StackOverflow. The answer is “no.”

Here is another example:

Jennifer looked out her window and saw a really cool cloud below her. She unbuckled her seatbelt and headed to the bathroom. Is Jennifer probably traveling more than 300 miles per hour relative to the Earth?

Model output:
300 miles per hour is about 480 km/h. This is about the speed of a commercial airplane. Clouds are usually below airplanes, so Jennifer is probably on an airplane.

The answer is “yes.”
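Step-by-step answers like these are elicited with chain-of-thought prompting, and the final answer is then read off the end of the generated reasoning. A minimal sketch of that extraction step (the regex and helper function below are my own illustration, not code from the paper):

```python
import re

def extract_final_answer(completion):
    """Return the answer from a completion that ends with
    'The answer is X.' (with or without quotes around X)."""
    match = re.search(r'[Tt]he answer is\s+"?([^".]+)"?\.?\s*$', completion)
    return match.group(1).strip() if match else None

# Reasoning text shaped like the model output quoted above.
output = (
    "300 miles per hour is about 480 km/h. This is about the speed of a "
    "commercial airplane. Clouds are usually below airplanes, so Jennifer "
    'is probably on an airplane. The answer is "yes".'
)
print(extract_final_answer(output))  # yes
```

Anchoring the pattern to the end of the completion keeps earlier reasoning (which may also mention answers) from being picked up by mistake.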

Next generation search engine?

The above examples of PaLM’s capacity for complex reasoning demonstrate how the next generation of search engines may be able to use knowledge from the Internet and other sources to answer complex questions.

Achieving an AI architecture that can produce answers that reflect the world around us is one of Google Pathways’ stated goals, and PaLM is a step in that direction.

However, the authors of the research stress that PaLM is not the final word on AI and search. They clearly state that PaLM is only a first step toward the next kind of search engine that Pathways envisions.

Before going further, there are two jargon terms that are important for understanding what PaLM is:

  • modality
  • generalization

The word “modality” refers to how things are experienced or the state in which they exist, for example text is read, images are seen, sounds are heard.

The word “generalization” in the context of machine learning refers to the ability of a language model to solve tasks it has not previously been trained on.

The researchers noted:

“PaLM is only the first step in our vision towards establishing Pathways as the future of ML scaling at Google and beyond.

We believe that PaLM demonstrates a strong foundation in our ultimate goal of developing a large-scale, modularized system that will have broad generalization capabilities across multiple modalities.”

Real-World Risks and Ethical Considerations

Something different about this research paper is that the researchers caution about ethical considerations.

They say that large-scale language models trained on Web data absorb many of the “toxic” stereotypes and social inequalities spread across the Web and that PaLM is not immune to those unwanted effects.

The research paper cites a 2021 research paper that explores how large-scale language models can cause the following harms:

  1. Discrimination, exclusion and toxicity
  2. Information hazards
  3. Misinformation harms
  4. Malicious uses
  5. Human-computer interaction harms
  6. Automation, access, and environmental harms

Finally, the researchers note that PaLM does indeed reflect toxic social stereotypes and clarify that filtering out these biases is challenging.

The PaLM researchers explain:

“Our analysis reveals that our training data, and as a consequence PaLM, do reflect various social stereotypes and toxicity associations around identity terms.

Removing these associations is non-trivial… Future work should look into effectively tackling such undesirable biases in data, and their influence on model behavior.

Meanwhile, any real-world use of PaLM for downstream tasks should perform further contextualized fairness evaluations to assess the potential harms and introduce appropriate mitigation and protections.”

PaLM can be viewed as a glimpse of what the next generation of search will look like. PaLM makes extraordinary state-of-the-art claims, but the researchers also say there is more work to be done, including finding ways to reduce the harmful spread of misinformation, toxic stereotypes, and other unwanted outcomes.


Read Google’s AI Blog article about PaLM

Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance

Read Google research paper on PaLM

PaLM: Scaling Language Modeling with Pathways (PDF)
