# Impact of Multi-Modal Large Language Models

## What is “Multi-Modal”?

Multi-modal AI connects a large language model (LLM) with the ability to visually and semantically understand videos, text documents, X-rays, medical images, and other media, so that a user can query, or otherwise interact with, any type of media input using natural language. In the example shown, medical images are cross-referenced with doctors’ diagnoses and examiners’ notes in order to replicate an expert’s reading of the data with machine learning.

Imagine a world where medical screening could be enhanced, and made accessible, at one-tenth of its current cost.

<figure><img src="/files/iW4ygM4ihSl40fybRIi2" alt="" width="563"><figcaption></figcaption></figure>

<div><figure><img src="/files/5ysqc5WN2zxZXUzHDREQ" alt=""><figcaption></figcaption></figure> <figure><img src="/files/goZYzs0CFnOIFK13JH5Q" alt="" width="375"><figcaption></figcaption></figure></div>
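To make the idea concrete, here is a minimal sketch of what querying a medical image with natural language might look like, assuming an OpenAI-style multimodal chat API. The model name, question, and image URL are illustrative placeholders and are not taken from this whitepaper.

```python
# Minimal sketch: asking a natural-language question about a medical image
# through a vision-capable chat model. The model name and image URL below
# are illustrative placeholders, not part of the original document.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable chat model
    messages=[
        {
            "role": "user",
            # A single user turn can mix text and image inputs,
            # which is exactly the "natural language on top of
            # any media" interaction described above.
            "content": [
                {
                    "type": "text",
                    "text": "Does this chest X-ray show any abnormality "
                            "consistent with the examiner's notes?",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/chest-xray.png"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The key design point is that the image and the question travel in the same message: the model grounds its textual answer in the pixels, rather than relying on a separate captioning step.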

