Commit 1c71316 (committed 2025-11-09), parent 25b6701
File tree: 1 file changed, +0 −70 lines

README.md

Lines changed: 0 additions & 70 deletions

```diff
@@ -1,73 +1,3 @@
-<!-- START doctoc generated TOC please keep comment here to allow auto update -->
-<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
-**Table of Contents**
-
-- [cntext:面向社会科学研究的中文文本分析工具库](#cntext%E9%9D%A2%E5%90%91%E7%A4%BE%E4%BC%9A%E7%A7%91%E5%AD%A6%E7%A0%94%E7%A9%B6%E7%9A%84%E4%B8%AD%E6%96%87%E6%96%87%E6%9C%AC%E5%88%86%E6%9E%90%E5%B7%A5%E5%85%B7%E5%BA%93)
-- [安装 cntext](#%E5%AE%89%E8%A3%85-cntext)
-- [功能模块](#%E5%8A%9F%E8%83%BD%E6%A8%A1%E5%9D%97)
-- [QuickStart](#quickstart)
-- [一、IO 模块](#%E4%B8%80io-%E6%A8%A1%E5%9D%97)
-- [1.1 get_dict_list()](#11-get_dict_list)
-- [1.2 内置 yaml 词典](#12-%E5%86%85%E7%BD%AE-yaml-%E8%AF%8D%E5%85%B8)
-- [1.3 read_dict_yaml()](#13-read_dict_yaml)
-- [1.4 detect_encoding()](#14-detect_encoding)
-- [1.5 get_files(fformat)](#15-get_filesfformat)
-- [1.6 read_pdf](#16-read_pdf)
-- [1.7 read_docx](#17-read_docx)
-- [1.8 read_file()](#18-read_file)
-- [1.9 read_files()](#19-read_files)
-- [1.10 extract_mda](#110-extract_mda)
-- [1.11 traditional2simple()](#111-traditional2simple)
-- [1.12 fix_text()](#112-fix_text)
-- [1.13 fix_contractions(text)](#113-fix_contractionstext)
-- [1.14 clean_text(text)](#114-clean_texttext)
-- [二、Stats 模块](#%E4%BA%8Cstats-%E6%A8%A1%E5%9D%97)
-- [2.1 word_count()](#21-word_count)
-- [2.2 readability()](#22-readability)
-- [2.3 sentiment(text, diction, lang)](#23-sentimenttext-diction-lang)
-- [2.4 sentiment_by_valence()](#24-sentiment_by_valence)
-- [2.5 word_in_context()](#25-word_in_context)
-- [2.6 epu()](#26-epu)
-- [2.7 fepu()](#27-fepu)
-- [2.8 semantic_brand_score()](#28-semantic_brand_score)
-- [2.9 文本相似度](#29-%E6%96%87%E6%9C%AC%E7%9B%B8%E4%BC%BC%E5%BA%A6)
-- [2.10 word_hhi](#210-word_hhi)
-- [三、Plot 模块](#%E4%B8%89plot-%E6%A8%A1%E5%9D%97)
-- [3.1 matplotlib_chinese()](#31-matplotlib_chinese)
-- [3.2 lexical_dispersion_plot1()](#32-lexical_dispersion_plot1)
-- [3.3 lexical_dispersion_plot2()](#33-lexical_dispersion_plot2)
-- [四、Model 模块](#%E5%9B%9Bmodel-%E6%A8%A1%E5%9D%97)
-- [4.1 Word2Vec()](#41-word2vec)
-- [4.2 GloVe()](#42-glove)
-- [4.3 evaluate_similarity()](#43-evaluate_similarity)
-- [4.4 evaluate_analogy()](#44-evaluate_analogy)
-- [4.5 SoPmi()](#45-sopmi)
-- [4.6 load_w2v()](#46-load_w2v)
-- [4.7 glove2word2vec()](#47-glove2word2vec)
-- [注意](#%E6%B3%A8%E6%84%8F)
-- [4.8 expand_dictionary()](#48-expand_dictionary)
-- [五、Mind 模块](#%E4%BA%94mind-%E6%A8%A1%E5%9D%97)
-- [5.1 semantic_centroid(wv, words)](#51-semantic_centroidwv-words)
-- [5.2 generate_concept_axis(wv, poswords, negwords)](#52-generate_concept_axiswv-poswords-negwords)
-- [5.3 sematic_distance()](#53-sematic_distance)
-- [5.4 sematic_projection()](#54-sematic_projection)
-- [5.5 project_word](#55-project_word)
-- [5.6 project_text()](#56-project_text)
-- [5.7 divergent_association_task()](#57-divergent_association_task)
-- [5.8 discursive_diversity_score()](#58-discursive_diversity_score)
-- [5.8 procrustes_align()](#58-procrustes_align)
-- [六、LLM 模块](#%E5%85%ADllm-%E6%A8%A1%E5%9D%97)
-- [6.1 ct.llm()](#61-ctllm)
-- [6.2 内置prompt](#62-%E5%86%85%E7%BD%AEprompt)
-- [使用声明](#%E4%BD%BF%E7%94%A8%E5%A3%B0%E6%98%8E)
-- [apalike](#apalike)
-- [bibtex](#bibtex)
-- [endnote](#endnote)
-
-<!-- END doctoc generated TOC please keep comment here to allow auto update -->
-
-
-
 ## cntext: a Chinese text analysis toolkit for social science research
 cntext is a Python library for Chinese text analysis designed for **empirical researchers in the social sciences**. It goes beyond traditional word-frequency-based sentiment analysis, adding word-embedding training and semantic-projection computation that **can measure abstract constructs from large-scale unstructured text**, such as attitudes, cognition, cultural concepts, and psychological states.
 
```
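The semantic-projection idea mentioned in the retained README intro (and reflected in the Mind module's `generate_concept_axis` and `project_word` entries) can be illustrated independently of cntext. The sketch below uses plain NumPy with made-up toy vectors; the vector values and word choices are purely illustrative assumptions, not cntext's implementation or data:

```python
import numpy as np

# Toy 2-d word vectors standing in for trained embeddings (illustrative values only).
wv = {
    "rich": np.array([1.0, 0.2]),
    "poor": np.array([-1.0, 0.1]),
    "gold": np.array([0.8, 0.4]),
    "debt": np.array([-0.7, 0.3]),
}

def concept_axis(pos_words, neg_words):
    """Unit vector pointing from the negative-seed centroid to the positive-seed centroid."""
    pos = np.mean([wv[w] for w in pos_words], axis=0)
    neg = np.mean([wv[w] for w in neg_words], axis=0)
    axis = pos - neg
    return axis / np.linalg.norm(axis)

def project(word, axis):
    """Scalar position of a word along the concept axis (dot product)."""
    return float(np.dot(wv[word], axis))

axis = concept_axis(["rich"], ["poor"])
print(project("gold", axis))  # positive: "gold" sits on the "rich" side of the axis
print(project("debt", axis))  # negative: "debt" sits on the "poor" side
```

The design point is that once an axis is built from seed words, any word (or averaged text vector) can be scored on that construct by a single dot product, which is what makes the approach scale to large corpora.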