1 change: 1 addition & 0 deletions sources/platform/integrations/ai/haystack.md
@@ -185,4 +185,5 @@ To run it, you can use the following command: `python apify_integration.py`
- [Apify-haystack integration documentation](https://haystack.deepset.ai/integrations/apify)
- [Apify-haystack integration source code](https://github.com/apify/apify-haystack)
- [Example: RAG - Extract and use website content for question answering](https://haystack.deepset.ai/cookbook/apify_haystack_rag)
- [Example: RAG: Web Search and Analysis with Apify and Haystack](https://haystack.deepset.ai/cookbook/apify_haystack_rag_web_browser)
- [Example: Analyze Your Instagram Comments’ Vibe](https://haystack.deepset.ai/cookbook/apify_haystack_instagram_comments_analysis)
38 changes: 25 additions & 13 deletions sources/platform/integrations/ai/langchain.md
@@ -20,7 +20,7 @@ If you prefer to use JavaScript, you can follow the [JavaScript LangChain docum

Before we start with the integration, we need to install all dependencies:

`pip install apify-client langchain langchain_community langchain_openai openai tiktoken`
`pip install langchain langchain-openai langchain-apify`

After successful installation of all dependencies, we can start writing code.

@@ -30,9 +30,10 @@ First, import all required packages:
import os

from langchain.indexes import VectorstoreIndexCreator
from langchain_community.utilities import ApifyWrapper
from langchain_core.document_loaders.base import Document
from langchain_openai import OpenAI
from langchain_apify import ApifyWrapper
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import ChatOpenAI
from langchain_openai.embeddings import OpenAIEmbeddings
```

@@ -49,6 +50,7 @@ Note that if you already have some results in an Apify dataset, you can load the
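For example, loading documents from an existing dataset could look roughly like this (a minimal sketch, assuming the `ApifyDatasetLoader` class is exported by the `langchain-apify` package and using a placeholder dataset ID):

```python
from langchain_apify import ApifyDatasetLoader
from langchain_core.documents import Document

# Placeholder dataset ID from a previous Actor run; replace it with your own.
loader = ApifyDatasetLoader(
    dataset_id="your-dataset-id",
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"] or "", metadata={"source": item["url"]}
    ),
)
```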

```python
apify = ApifyWrapper()
llm = ChatOpenAI(model="gpt-4o-mini")

loader = apify.call_actor(
    actor_id="apify/website-content-crawler",
@@ -68,14 +70,17 @@ The Actor call may take some time as it crawls the LangChain documentation websi
Initialize the vector index from the crawled documents:

```python
index = VectorstoreIndexCreator(embedding=OpenAIEmbeddings()).from_loaders([loader])
index = VectorstoreIndexCreator(
    vectorstore_cls=InMemoryVectorStore,
    embedding=OpenAIEmbeddings()
).from_loaders([loader])
```

And finally, query the vector index:

```python
query = "What is LangChain?"
result = index.query_with_sources(query, llm=OpenAI())
result = index.query_with_sources(query, llm=llm)

print("answer:", result["answer"])
print("source:", result["sources"])
@@ -87,15 +92,17 @@ If you want to test the whole example, you can simply create a new file, `langch
import os

from langchain.indexes import VectorstoreIndexCreator
from langchain_community.utilities import ApifyWrapper
from langchain_core.document_loaders.base import Document
from langchain_openai import OpenAI
from langchain_apify import ApifyWrapper
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import ChatOpenAI
from langchain_openai.embeddings import OpenAIEmbeddings

os.environ["OPENAI_API_KEY"] = "Your OpenAI API key"
os.environ["APIFY_API_TOKEN"] = "Your Apify API token"

apify = ApifyWrapper()
llm = ChatOpenAI(model="gpt-4o-mini")

print("Call website content crawler ...")
loader = apify.call_actor(
@@ -104,9 +111,12 @@ loader = apify.call_actor(
    dataset_mapping_function=lambda item: Document(page_content=item["text"] or "", metadata={"source": item["url"]}),
)
print("Compute embeddings...")
index = VectorstoreIndexCreator(embedding=OpenAIEmbeddings()).from_loaders([loader])
index = VectorstoreIndexCreator(
    vectorstore_cls=InMemoryVectorStore,
    embedding=OpenAIEmbeddings()
).from_loaders([loader])
query = "What is LangChain?"
result = index.query_with_sources(query, llm=OpenAI())
result = index.query_with_sources(query, llm=llm)

print("answer:", result["answer"])
print("source:", result["sources"])
@@ -117,9 +127,11 @@ To run it, you can use the following command: `python langchain_integration.py`
After running the code, you should see the following output:

```text
answer: LangChain is a framework for developing applications powered by language models. It provides standard, extendable interfaces, external integrations, and end-to-end implementations for off-the-shelf use. It also integrates with other LLMs, systems, and products to create a vibrant and thriving ecosystem.
answer: LangChain is a framework designed for developing applications powered by large language models (LLMs). It simplifies the entire application lifecycle, from development to productionization and deployment. LangChain provides open-source components and integrates with various third-party tools, making it easier to build and optimize applications using language models.

source: https://python.langchain.com
source: https://python.langchain.com/docs/get_started/introduction
```

LangChain is a standard interface through which you can interact with a variety of large language models (LLMs).
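As a brief illustration of that interface (a sketch, not taken from the guide above), the same call works regardless of which chat model sits behind it:

```python
from langchain_openai import ChatOpenAI

# Any chat model integration exposes the same `invoke` method,
# so application code stays unchanged when you swap models.
llm = ChatOpenAI(model="gpt-4o-mini")
response = llm.invoke("What is LangChain?")
print(response.content)
```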
160 changes: 160 additions & 0 deletions sources/platform/integrations/ai/langgraph.md
@@ -0,0 +1,160 @@
---
title: 🦜🔘➡️ LangGraph integration
sidebar_label: LangGraph
description: Learn how to build AI Agents with Apify and LangGraph 🦜🔘➡️.
sidebar_position: 1
slug: /integrations/langgraph
---

**Learn how to build AI Agents with Apify and LangGraph.**

---

## What is LangGraph

[LangGraph](https://www.langchain.com/langgraph) is a framework designed for constructing stateful, multi-agent applications with Large Language Models (LLMs), allowing developers to build complex AI agent workflows that can leverage tools, APIs, and databases.

:::note Explore LangGraph

For more in-depth details on LangGraph, check out its [official documentation](https://langchain-ai.github.io/langgraph/).

:::

## How to use Apify with LangGraph

This guide will demonstrate how to use Apify Actors with LangGraph by building a ReAct agent that uses the [RAG Web Browser](https://apify.com/apify/rag-web-browser) Actor to search Google for TikTok profiles and the [TikTok Data Extractor](https://apify.com/clockworks/free-tiktok-scraper) Actor to extract data from those profiles for analysis.

### Prerequisites

- **Apify API token**: To use Apify Actors in LangGraph, you need an Apify API token. If you don't have one, you can learn how to obtain it in the [Apify documentation](https://docs.apify.com/platform/integrations/api).

- **OpenAI API key**: To work with agents in LangGraph, you need an OpenAI API key. If you don't have one, get it from the [OpenAI platform](https://platform.openai.com/account/api-keys).

- **Python packages**: You need to install the following Python packages:

```bash
pip install langgraph langchain-apify langchain-openai
```

### Building the TikTok profile search and analysis agent

First, import all required packages:

```python
import os

from langchain_apify import ApifyActorsTool
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
```

Next, set the environment variables for the Apify API token and OpenAI API key:

```python
os.environ["OPENAI_API_KEY"] = "Your OpenAI API key"
os.environ["APIFY_API_TOKEN"] = "Your Apify API token"
```

Instantiate LLM and Apify Actors tools:

```python
llm = ChatOpenAI(model="gpt-4o-mini")

browser = ApifyActorsTool("apify/rag-web-browser")
tiktok = ApifyActorsTool("clockworks/free-tiktok-scraper")
```

Create the ReAct agent with the LLM and Apify Actors tools:

```python
tools = [browser, tiktok]
agent_executor = create_react_agent(llm, tools)
```

Finally, run the agent and stream the messages:

```python
for state in agent_executor.stream(
    stream_mode="values",
    input={
        "messages": [
            HumanMessage(content="Search the web for OpenAI TikTok profile and analyze their profile.")
        ]
    }):
    state["messages"][-1].pretty_print()
```
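If you only need the final answer rather than each intermediate step, the compiled agent can also be invoked in one call. A minimal sketch, assuming the standard LangGraph runnable interface:

```python
# Run the agent to completion and read back only the final message.
result = agent_executor.invoke(
    {"messages": [HumanMessage(content="Search the web for OpenAI TikTok profile and analyze their profile.")]}
)
print(result["messages"][-1].content)
```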

:::note Search and analysis may take some time

The agent tool call may take some time as it searches the web for OpenAI TikTok profiles and analyzes them.

:::

You will see the agent's messages in the console, showing each step of the workflow.

```text
================================ Human Message =================================

Search the web for OpenAI TikTok profile and analyze their profile.
================================== AI Message ==================================
Tool Calls:
apify_actor_apify_rag-web-browser (call_y2rbmQ6gYJYC2lHzWJAoKDaq)
Call ID: call_y2rbmQ6gYJYC2lHzWJAoKDaq
Args:
run_input: {"query":"OpenAI TikTok profile","maxResults":1}

...

================================== AI Message ==================================

The OpenAI TikTok profile is titled "OpenAI (@openai) Official." Here are some key details about the profile:

- **Followers**: 592.3K
- **Likes**: 3.3M
- **Description**: The profile features "low key research previews" and includes videos that showcase their various projects and research developments.

### Profile Overview:
- **Profile URL**: [OpenAI TikTok Profile](https://www.tiktok.com/@openai?lang=en)
- **Content Focus**: The posts primarily involve previews of OpenAI's research and various AI-related innovations.

...

```


If you want to test the whole example, you can simply create a new file, `langgraph_integration.py`, and copy the whole code into it.

```python
import os

from langchain_apify import ApifyActorsTool
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

os.environ["OPENAI_API_KEY"] = "Your OpenAI API key"
os.environ["APIFY_API_TOKEN"] = "Your Apify API token"

llm = ChatOpenAI(model="gpt-4o-mini")

browser = ApifyActorsTool("apify/rag-web-browser")
tiktok = ApifyActorsTool("clockworks/free-tiktok-scraper")

tools = [browser, tiktok]
agent_executor = create_react_agent(llm, tools)

for state in agent_executor.stream(
    stream_mode="values",
    input={
        "messages": [
            HumanMessage(content="Search the web for OpenAI TikTok profile and analyze their profile.")
        ]
    }):
    state["messages"][-1].pretty_print()
```

## Resources

- [Apify Actors](https://docs.apify.com/platform/actors)
- [LangGraph - How to Create a ReAct Agent](https://langchain-ai.github.io/langgraph/how-tos/create-react-agent/)