Commit 0712f5c

Update API inference documentation (automated) (#1506)
Co-authored-by: hanouticelina <[email protected]>
1 parent: c1161c6

2 files changed: +5 -3 lines changed


docs/api-inference/tasks/question-answering.md
Lines changed: 1 addition & 0 deletions

@@ -26,6 +26,7 @@ For more details about the `question-answering` task, check out its [dedicated p
 
 - [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2): A robust baseline model for most question answering domains.
 - [distilbert/distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert/distilbert-base-cased-distilled-squad): Small yet robust model that can answer questions.
+- [google/tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq): A special model that can answer questions from tables.
 
 Explore all available models and find the one that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=question-answering&sort=trending).
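
The recommended extractive `question-answering` models listed above take a different payload from the table variant: a question plus a free-text context instead of a table. The snippet below is not part of this commit; it is a minimal sketch of such a call, assuming the same `requests.post` pattern used in these task docs, with `deepset/roberta-base-squad2` chosen purely as an illustrative model.

```python
import requests

# Illustration only (not part of this commit): extractive QA payload is
# "question" + "context", unlike the table payload shown further down.
API_URL = "https://api-inference.huggingface.co/models/deepset/roberta-base-squad2"
headers = {"Authorization": "Bearer hf_***"}  # replace with a real User Access Token

def query(payload):
    # Assumed to mirror the requests-based pattern used elsewhere in these docs.
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": {
        "question": "Which model can answer questions from tables?",
        "context": "google/tapas-base-finetuned-wtq is a special model that can answer questions from tables.",
    }
})
print(output)  # typically an answer span with a confidence score
```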

docs/api-inference/tasks/table-question-answering.md
Lines changed: 4 additions & 3 deletions

@@ -24,6 +24,7 @@ For more details about the `table-question-answering` task, check out its [dedic
 
 ### Recommended models
 
+- [google/tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq): A robust table question answering model.
 
 Explore all available models and find the one that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=table-question-answering&sort=trending).
 
@@ -34,7 +35,7 @@ Explore all available models and find the one that suits you best [here](https:/
 
 <curl>
 ```bash
-curl https://api-inference.huggingface.co/models/<REPO_ID> \
+curl https://api-inference.huggingface.co/models/google/tapas-base-finetuned-wtq \
 	-X POST \
 	-d '{"inputs": { "query": "How many stars does the transformers repository have?", "table": { "Repository": ["Transformers", "Datasets", "Tokenizers"], "Stars": ["36542", "4512", "3934"], "Contributors": ["651", "77", "34"], "Programming language": [ "Python", "Python", "Rust, Python and NodeJS" ] } }}' \
 	-H 'Content-Type: application/json' \
@@ -46,7 +47,7 @@ curl https://api-inference.huggingface.co/models/<REPO_ID> \
 ```py
 import requests
 
-API_URL = "https://api-inference.huggingface.co/models/<REPO_ID>"
+API_URL = "https://api-inference.huggingface.co/models/google/tapas-base-finetuned-wtq"
 headers = {"Authorization": "Bearer hf_***"}
 
 def query(payload):
@@ -77,7 +78,7 @@ To use the Python client, see `huggingface_hub`'s [package reference](https://hu
 ```js
 async function query(data) {
 	const response = await fetch(
-		"https://api-inference.huggingface.co/models/<REPO_ID>",
+		"https://api-inference.huggingface.co/models/google/tapas-base-finetuned-wtq",
 		{
 			headers: {
 				Authorization: "Bearer hf_***",
