Commit 7cf2959

Run generate script
1 parent 1efb326 commit 7cf2959

3 files changed (+7, -3 lines)

docs/api-inference/tasks/question-answering.md

Lines changed: 1 addition & 0 deletions
@@ -26,6 +26,7 @@ For more details about the `question-answering` task, check out its [dedicated p
 
 - [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2): A robust baseline model for most question answering domains.
 - [distilbert/distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert/distilbert-base-cased-distilled-squad): Small yet robust model that can answer questions.
+- [google/tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq): A special model that can answer questions from tables.
 
 This is only a subset of the supported models. Find the model that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=question-answering&sort=trending).
 
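For reference, any of the recommended models listed above can be called through the Inference API in the same way as the other snippets touched by this commit. The sketch below assumes the standard `question`/`context` payload of the `question-answering` task and uses `hf_***` as a placeholder access token:

```python
import requests

# Hypothetical example: query one of the recommended question-answering models.
# The question/context payload shape is assumed from the standard task format.
API_URL = "https://api-inference.huggingface.co/models/deepset/roberta-base-squad2"
headers = {"Authorization": "Bearer hf_***"}  # replace with a real token

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": {
        "question": "Which repository has the most stars?",
        "context": "Transformers has 36542 stars, Datasets has 4512 stars and Tokenizers has 3934 stars.",
    }
})
print(output)
```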

docs/api-inference/tasks/table-question-answering.md

Lines changed: 4 additions & 3 deletions
@@ -24,6 +24,7 @@ For more details about the `table-question-answering` task, check out its [dedic
 
 ### Recommended models
 
+- [google/tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq): A robust table question answering model.
 
 This is only a subset of the supported models. Find the model that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=table-question-answering&sort=trending).
 

@@ -34,7 +35,7 @@ This is only a subset of the supported models. Find the model that suits you bes
 
 <curl>
 ```bash
-curl https://api-inference.huggingface.co/models/<REPO_ID> \
+curl https://api-inference.huggingface.co/models/google/tapas-base-finetuned-wtq \
 	-X POST \
 	-d '{"inputs": { "query": "How many stars does the transformers repository have?", "table": { "Repository": ["Transformers", "Datasets", "Tokenizers"], "Stars": ["36542", "4512", "3934"], "Contributors": ["651", "77", "34"], "Programming language": [ "Python", "Python", "Rust, Python and NodeJS" ] } }}' \
 	-H 'Content-Type: application/json' \
@@ -46,7 +47,7 @@ curl https://api-inference.huggingface.co/models/<REPO_ID> \
 ```py
 import requests
 
-API_URL = "https://api-inference.huggingface.co/models/<REPO_ID>"
+API_URL = "https://api-inference.huggingface.co/models/google/tapas-base-finetuned-wtq"
 headers = {"Authorization": "Bearer hf_***"}
 
 def query(payload):
@@ -77,7 +78,7 @@ To use the Python client, see `huggingface_hub`'s [package reference](https://hu
 ```js
 async function query(data) {
 	const response = await fetch(
-		"https://api-inference.huggingface.co/models/<REPO_ID>",
+		"https://api-inference.huggingface.co/models/google/tapas-base-finetuned-wtq",
 		{
 			headers: {
 				Authorization: "Bearer hf_***"
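Assembled from the fragments above, the updated Python snippet queries the new default model roughly as follows; this is a sketch using the same example table as the curl payload, with `hf_***` standing in for a real access token:

```python
import requests

# Endpoint for the model the snippets now point to.
API_URL = "https://api-inference.huggingface.co/models/google/tapas-base-finetuned-wtq"
headers = {"Authorization": "Bearer hf_***"}  # replace with a real token

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

# Same example payload as in the curl snippet above.
output = query({
    "inputs": {
        "query": "How many stars does the transformers repository have?",
        "table": {
            "Repository": ["Transformers", "Datasets", "Tokenizers"],
            "Stars": ["36542", "4512", "3934"],
            "Contributors": ["651", "77", "34"],
            "Programming language": ["Python", "Python", "Rust, Python and NodeJS"],
        },
    }
})
print(output)
```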

scripts/api-inference/scripts/generate.ts

Lines changed: 2 additions & 0 deletions
@@ -110,6 +110,7 @@ const HAS_SNIPPET_FN = {
   js: snippets.js.hasJsInferenceSnippet,
   python: snippets.python.hasPythonInferenceSnippet,
 } as const;
+
 interface MinimalModelData {
   id: string;
   pipeline_tag?: PipelineType;
@@ -118,6 +119,7 @@ interface MinimalModelData {
   config?: JsonObject;
   tags?: string[];
 }
+
 export function getInferenceSnippet(
   modelData: MinimalModelData,
   language: InferenceSnippetLanguage,
