First draft of the Cognitive Metascience Lab's (University of Warsaw) awesome list of resources.
Welcome to the GitHub page of our Cognitive Metascience Lab. Here, we aim to create a platform where we can share resources and knowledge with anyone interested in our field or related areas of study. Our mission is to contribute to the advancement of cognitive metascience and promote open access to valuable resources. Whether you're a researcher, student, or enthusiast, we invite you to check out our curated collection and contribute your own insights, tools, or findings. This platform is open to everyone, and we believe that collective knowledge-sharing can guide us forward in our pursuit of understanding the complexities of this scientific field. So feel free to browse, engage, and join us in exploring and discovering sources that make our scientific work a bit easier.
In the following sections you will find lists of resources for NLP and citation analysis, as well as an overview of the literature on scientific theories.
From robust language models like SciBERT to annotation platforms like Doccano, these resources offer a diverse array of tools designed for text analysis and understanding. Whether you're exploring sentiment analysis, entity recognition, or document summarization, these tools provide the necessary infrastructure for work in the field of natural language processing, making them useful for anyone interested in advancing their understanding of linguistics, machine learning, or cognitive science.
Tools
By using Lingo4G you can get an overview of thousands of documents within seconds and instantly drill down to documents of interest. You can build custom text-mining pipelines ranging from simple search to 2D mapping, time-series analysis, and duplicate detection, and combine topics, clusters, and 2D document maps into powerful visualizations.
The basic workflow of CorTexT Manager is as follows: (1) upload raw files from various scientific bibliography databases (ISI Thomson Web of Science, PubMed, etc.) or simple CSV files; (2) transform the text files into a standard corpus database file; (3) perform a series of graphical analyses to produce descriptive statistics, social graphs of entities, and timeline-based phylogenetic reconstructions; (4) download the outputs in formats compatible with third-party software.
- Text Visualization Browser
- SAGE Texti (website expired)
- doccano
Doccano is an open-source text annotation tool for humans. It provides annotation features for text classification, sequence labeling, and sequence to sequence tasks. You can create labeled data for sentiment analysis, named entity recognition, text summarization, and so on. Just create a project, upload data, and start annotating. You can build a dataset in hours.
SciBERT is a BERT model trained on scientific text: papers from the semanticscholar.org corpus (1.14M papers, 3.1B tokens), using the full text of the papers rather than abstracts only. SciBERT has its own vocabulary (scivocab) built to best match the training corpus; cased and uncased versions are available, along with models trained on the original BERT vocabulary (basevocab) for comparison. It achieves state-of-the-art performance on a wide range of scientific-domain NLP tasks.
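For a quick start, here is a minimal sketch (not taken from the SciBERT repository) that loads the publicly released allenai/scibert_scivocab_uncased checkpoint through the Hugging Face transformers library and embeds a single sentence; it assumes the transformers and torch packages are installed.

```python
# Minimal sketch: embed one sentence with SciBERT via Hugging Face transformers.
# Assumes the `transformers` and `torch` packages and the public
# allenai/scibert_scivocab_uncased checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

sentence = "The hippocampus supports the consolidation of episodic memories."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into a single sentence vector.
sentence_vector = outputs.last_hidden_state.mean(dim=1).squeeze()
print(sentence_vector.shape)  # torch.Size([768])
```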
CADE can and has been used for several different tasks: from general temporal vector space alignment [1] and a more general comparison of language variation [2], to tasks like semantic change detection in diachronic contexts [3,6] and narrative understanding [5].
The CRExplorer uses data from Web of Science (Clarivate Analytics) or Scopus (Elsevier) as input. Publication sets have to be downloaded including the references cited. The program focuses on the analysis of the cited references, in particular on the referenced publication years. Over time, "citation classics" of a field become more pronounced. When the aggregated citations are plotted along the time axis, one obtains a "spectrogram" with distinct peaks. CRExplorer visualizes this spectrogram, cleans the cited references (disambiguation), and uses a smoothing algorithm to suppress the noise.
Weaviate is an open source vector database that stores both objects and vectors, allowing for combining vector search with structured filtering with the fault-tolerance and scalability of a cloud-native database, all accessible through GraphQL, REST, and various language clients.
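As a rough illustration of how object storage, vector search, and structured data come together, here is a sketch using the weaviate-client v3 Python interface; it assumes a local Weaviate instance at localhost:8080 with a text2vec vectorizer module enabled, and the class and property names ("Paper", "title", "abstract") are purely illustrative.

```python
# Sketch with the weaviate-client v3 interface; assumes a local Weaviate
# instance at localhost:8080 with a text2vec module enabled.
# Class and property names here are illustrative placeholders.
import weaviate

client = weaviate.Client("http://localhost:8080")

client.schema.create_class({
    "class": "Paper",
    "properties": [
        {"name": "title", "dataType": ["text"]},
        {"name": "abstract", "dataType": ["text"]},
    ],
})

client.data_object.create(
    {"title": "SciBERT", "abstract": "A pretrained language model for scientific text."},
    "Paper",
)

# Combine vector search with structured retrieval through the GraphQL API.
result = (
    client.query.get("Paper", ["title"])
    .with_near_text({"concepts": ["language models for science"]})
    .with_limit(3)
    .do()
)
print(result)
```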
paperai is a semantic search and workflow application for medical/scientific papers. Applications range from semantic search indexes that find matches for medical/scientific queries to full-fledged reporting applications powered by machine learning.
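paperai is built on txtai, so as a rough illustration of the underlying semantic-search pattern (not paperai's own interface), here is a small txtai sketch; the model name and example texts are placeholders.

```python
# Rough illustration of the semantic-search pattern underlying paperai,
# using txtai (the library paperai builds on); this is not paperai's own API.
from txtai.embeddings import Embeddings

documents = [
    "Hydroxychloroquine showed no benefit in hospitalized patients.",
    "Vector databases enable semantic retrieval over scientific text.",
    "Citation context reveals whether a paper supports or contrasts a claim.",
]

embeddings = Embeddings({"path": "sentence-transformers/all-MiniLM-L6-v2"})
embeddings.index([(i, text, None) for i, text in enumerate(documents)])

# Returns (id, score) pairs ranked by semantic similarity to the query.
print(embeddings.search("treatment efficacy in covid patients", 1))
```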
Interact, discover insights, and build with unstructured text, image, and audio data. Uncover data insights from your text and images right from your web browser. Make sense of your data with AI-computed topics, data labels, groupings, and embeddings. Share text, image, and embedding datasets with your team or customers. Scales from 100 to 100 million unstructured datapoints.
Ask any question to two anonymous models (e.g., ChatGPT, Claude, Llama) and vote for the better one! You can continue chatting until you identify a winner. Votes are not counted if a model's identity is revealed during the conversation.
Overview of many useful AI tools.
OP Vault uses the OP Stack (OpenAI + Pinecone Vector Database) to enable users to upload their own custom knowledgebase files and ask questions about their contents. The primary focus is on human-readable content like books, letters, and other documents, making it a practical and valuable tool for knowledge extraction and question-answering. You can upload an entire library's worth of books and documents and receive pointed answers along with the name of the file and the specific section within the file that the answer is based on!
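As a rough sketch of the retrieval step behind the OP Stack (embed a question with OpenAI, fetch the nearest document chunks from Pinecone), the snippet below is not OP Vault's own code; it assumes the pre-1.0 openai package and the v2 pinecone-client interface, API keys in the environment, and a pre-populated index whose name ("vault") is hypothetical.

```python
# Rough sketch of the OP Stack retrieval step (OpenAI embeddings + Pinecone),
# not OP Vault's own code. Assumes openai<1.0 and pinecone-client v2,
# API keys in the environment, and a pre-populated index named "vault"
# (index name and setup are hypothetical).
import os
import openai
import pinecone

openai.api_key = os.environ["OPENAI_API_KEY"]
pinecone.init(api_key=os.environ["PINECONE_API_KEY"], environment="us-east-1-aws")
index = pinecone.Index("vault")

question = "What does the author say about theory building?"
embedding = openai.Embedding.create(
    model="text-embedding-ada-002", input=[question]
)["data"][0]["embedding"]

# Retrieve the document chunks whose vectors are closest to the question.
matches = index.query(vector=embedding, top_k=3, include_metadata=True)
for match in matches["matches"]:
    print(match["id"], match["score"])
```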
Twarc is a command line tool and Python library for collecting and archiving Twitter JSON data via the Twitter API. It has separate commands (twarc and twarc2) for working with the older v1.1 API and the newer v2 API and Academic Access (respectively).
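For programmatic use, a minimal sketch with twarc's Python interface for the v2 API might look like the following; it assumes a valid bearer token and that the recent-search endpoint is available at your access level (the query string is just an example).

```python
# Minimal sketch using twarc's Python interface for the Twitter v2 API;
# assumes a valid bearer token and access to the recent-search endpoint.
from twarc import Twarc2

client = Twarc2(bearer_token="YOUR_BEARER_TOKEN")

# search_recent yields pages of raw Twitter JSON, which twarc archives as-is.
for page in client.search_recent("metascience lang:en"):
    for tweet in page.get("data", []):
        print(tweet["id"], tweet["text"][:80])
    break  # just the first page for illustration
```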
Articles
Additional resources
The Citation Analysis tools featured here are essential for researchers seeking to navigate the scholarly landscape efficiently. From databases like Microsoft Academic Graph to visualization tools like CiteSpace, these resources empower users to uncover trends and gain insights into academic discourse through bibliometric analysis.
Check it out!
An aggregator for diverse open knowledge sets, including scholarly works and patents. It offers discovery and analytics tools; 'The Lens MetaRecord' integrates multiple identifiers and sources to provide comprehensive, normalized metadata while maintaining provenance.
SN SciGraph speeds up content discovery and broadens the scope of research by exposing previously unseen patterns and presenting new perspectives (by linking Springer Nature publications to other data types such as grants, conferences, and freely available taxonomies). Provides programmatic access to SN SciGraph data via an API.
Free Windows-based program, designed for altmetrics, citation analysis, social web analysis, and webometrics, including link analysis. Data is downloaded through APIs or directly, and various text and citation processing options are provided. Altmetrics and citation analysis cover data sources like Mendeley, Altmetric.com, Google Books, and WorldCat. Social web analysis includes platforms such as YouTube, Twitter, Goodreads, and Flickr.
Knowledge graph with scientific publications, citation relationships, and authors, supporting various applications. Updated bi-weekly, it offers an Azure Storage distribution service for research scenarios. The Microsoft Academic Knowledge API, intended for lightweight usage, has a monthly quota and traffic throttles.
Database that integrates data and analytical tools in a single platform with over 106 million publications linked to grants, patents, clinical trials, datasets, policy papers, citations and article metrics.
Visual analytic tool designed for studying scholarly literature trends. The workflow involves analytic tasks and a variety of configurations, emphasizing the importance of understanding underlying concepts and principles. The tool is unique for linking publications with grants, patents, clinical trials, datasets, and policy papers, offering a holistic research landscape.
OpenAIRE provides a large database of research data stored in a graph format (representing relationships between research outputs, citations, funding, organizations, and collaborations). It is used for research evaluation as a replacement for proprietary databases such as Web of Science or Scopus.
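For programmatic access, a rough sketch of querying OpenAIRE's public HTTP search API is shown below; the endpoint and parameters are assumptions based on the documented search API, so check the current OpenAIRE Graph documentation before relying on them.

```python
# Rough sketch of querying the OpenAIRE public search API over HTTP.
# Endpoint and parameters are assumptions based on the documented search API;
# verify against the current OpenAIRE documentation before relying on them.
import requests

response = requests.get(
    "https://api.openaire.eu/search/publications",
    params={"keywords": "metascience", "format": "json", "size": 5},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```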
Analyses the textual context of citations, distinguishing between supporting, mentioning, and contrasting citations. Processes full-text articles to create ‘Smart Citations’, which contain information about citation relationships, contextual details, and classification types. Also offers Custom Dashboards, a Zotero Plugin, and a Browser Extension. Sources papers from publishers, preprint servers, and other reputable sources, accessing full-text PDFs and XMLs for analysis.
In our Scientific Theories section, you'll find must-have resources for getting to grips with the essence of scientific exploration, from classic reads to enlightening articles, all offering insights into theory building and disciplinary perspectives.
Check it out!
The book considers the fundamental question: what is a scientific theory? It presents a range of options - from theories as sets of propositions, to theories as families of models, abstract artifacts, or fictions - and highlights the various problems they all face.
The study analyzes articles from Psychological Science (2009–2019) to assess the role of theory in psychology. Although 359 theories are mentioned, most are referenced only once, and 53.66% of manuscripts include the term "theory." Only 15.33% explicitly test predictions from theories, indicating potential gaps in cumulative theory building. The findings challenge the assumption that psychological science aligns with a hypothetico-deductive model, prompting questions about the field's reliance on theory. The study underscores the need for a clearer understanding of the role theory plays in shaping psychological research.
- How Computational Modeling Can Force Theory Building in Psychological Science (Guest & Martin, 2020)
The article advocates for the underappreciated value of computational modeling in psychology, asserting its potential to guide transparent theorizing. Computational modeling is seen as essential for conceptual analysis and formalizing intuitions, facilitating the development of open and transparent theories. The authors warn that a reluctance to embrace computational modeling may lead to replicability issues and hinder coherent theory building in psychology. The article introduces scientific inference as a sequential process and highlights the role of computational modeling in enhancing it beyond traditional practices like preregistration. Additionally, it provides practical insights on formalizing and implementing computational models, emphasizing their accessibility and benefits for all researchers.
The article argues for the crucial role of computational modeling in advancing psychological science, emphasizing its potential to shape "open theory" through conceptual analysis. Computational modeling is portrayed as essential for constraining the inference process, aiding in the creation of explanatory and predictive theories. The article predicts potential replicability crises and challenges in theory building if psychology continues to overlook computational modeling. Lack of formal modeling is identified as a barrier to transparent theorizing in psychology. Additionally, the article provides practical insights on formalizing, specifying, and implementing a computational model, stressing its accessibility and benefits for all researchers.
Addressing recent philosophical claims, Peter Vickers examines eight alleged 'inconsistent theories' in the history of science to challenge the oversimplified view that successful theories can tolerate internal inconsistencies. Vickers argues that labeling theories as 'inconsistent science' may depend on reconstructions that deviate from the actual history of science. Genuine inconsistency, when present, demands a context-specific understanding and response. The author cautions against overly general claims about science, highlighting the diverse nature of entities labeled as 'theories' with unique workings and responses to inconsistency. Vickers advocates for a particularist philosophy of science, encouraging an appreciation of the dramatic differences between identified 'inconsistencies in science.'
A century ago, Einstein's distinction between theories of principle and constructive theories highlighted their unique roles, relationships with data, and developmental methods. The article delves into Einstein's model of theory formation, using the distinction to analyze scientific practice in psychology. Recent discussions in psychology advocate for increased theoretical sophistication, aligning with Einstein's view. The authors argue for the value of this distinction, emphasizing the need for a renewed commitment to a natural history of psychology. In psychology, the shift toward theoretical development is deemed essential alongside methodological sophistication.
The article explores the impact of scientific theories, from foundational ones like relativity to emerging disciplines such as cognitive science and GIS. It thoroughly examines the structure of scientific theories through the Syntactic, Semantic, and Pragmatic Views, addressing their composition, function, and relationship to the world. The Syntactic View characterizes theory structure as an uninterpreted axiomatic system, while the Semantic View involves state-space and model/set-theoretic elements. The Pragmatic View introduces internal and external pluralism in theory structure, highlighting the influence of practice, function, and application. Although these views are often perceived as mutually exclusive, the article suggests that they can be complementary, offering diverse insights into the intricate nature of scientific theories.
The paper presents a framework to delineate the epistemic roles of data and models in empirical inquiry, critiquing Suppes' characterization for its inadequacy in explaining their roles in scientific practice. Using a case study in plant phenotyping, the author illustrates the difference between practices aiming to make data usable as evidence and those intending to use data to represent specific phenomena. The argument contends that the classification of objects as data or models is not contingent on intrinsic differences in physical properties, abstraction levels, or human intervention but on their distinctive roles in identifying and characterizing targets of investigation. The proposed characterization builds on Suppes' focus on data practices, avoiding the need for a rigid hierarchy or restrictive definition of data models as statistical constructs. The framework contributes to a nuanced understanding of the interplay between data and models in scientific inquiry.
A collection of eight philosophical essays that delve into the theoretical practices of physics. The initial two essays scrutinize these practices within physicists' treatises and journal articles, treating them as texts and positioning the philosopher of science as a critic. Subsequent essays address a spectrum of concerns in the intersection of philosophy and physics, encompassing topics such as laws, disunities, models and representation, computer simulation, explanation, and the discourse of physics. Hughes's work offers a critical examination of theoretical practices, providing insights into the complex relationship between philosophy and physics.
The text discusses the importance of theory and mathematical models in biology, emphasizing their role in explaining natural phenomena and making predictions. It traces the history of theoretical biology, highlighting major breakthroughs such as the theory of evolution by natural selection. The divide between theoretical and empirical biologists is acknowledged, and suggestions are provided to bridge this gap, including enhancing the mathematical training of biology students and improving communication between theorists and experimentalists. The text concludes by encouraging the submission of theoretical and modeling papers to scientific journals like eLife.
Lindley Darden critiques the historical treatment of theory construction in philosophy of science, highlighting the focus on justification rather than discovery. The paper argues for a more comprehensive understanding of the ongoing process of theory construction, emphasizing that theories rarely emerge fully formed and discussing the interplay between discovery and justification. Darden identifies factors influencing theory construction, such as the domain to be explained, and explores the need for new ideas, introducing the role of analogies and interfield connections in providing these novel concepts. The historical case study presented suggests that connections to well-developed related fields may be a superior source of new ideas compared to analogies. The text criticizes the lack of attention to theory construction in philosophical discourse and aims to address this gap.
Discusses the goal of Artificial Intelligence in identifying and solving tractable information processing problems. It introduces two types of theory, labeled as Types 1 and 2, and outlines their characteristics. The text aims to provide a rigorous perspective on the subject, shedding light on past work and briefly reviewing future prospects in the field of AI.
The text discusses challenges in applying machine learning to neuroimaging in cognitive neuroscience, referring to them as "ghosts." These challenges include source ambiguity in decoding brain data, issues in moving from data to stable phenomena, and the difficulty in interpreting neural representations. The text acknowledges the value of machine learning but emphasizes the need to address these challenges for a clearer understanding of brain computation and representation.
The article discusses the pivotal role of ontologies in cognitive science, serving as computable representations that aid in evidence aggregation and the resolution of theoretical debates. Through the explicit definition of entities across different theories, ontologies establish connections, illustrated by the example of 'perceived control' encompassing entities from various theories. The comparability of theories hinges on addressing identical entities, determining congruence or contradiction. Drawing parallels with successful examples from the natural sciences, the article advocates for integrating overarching theories in cognitive science. However, adopting theories in behavioral sciences poses a challenge due to numerous competing alternative entities, necessitating a principled approach. The suggested integrative approach based on ontologies underscores the importance of explicitly defining entities and relations for empirical annotation, striving towards a cumulative science.
The author explores twelve major virtues of good theories, categorizing them into four classes: evidential, coherential, aesthetic, and diachronic. These virtues include evidential accuracy, causal adequacy, explanatory depth, internal consistency, internal coherence, universal coherence, beauty, simplicity, unification, durability, fruitfulness, and applicability. The classification system follows a pattern of progressive disclosure and expansion within each class. The article aims to clarify the virtues and propose their coordinated and cumulative role in theory formation and evaluation across disciplines. The author argues for an informal and flexible logic of theory choice, emphasizing the multifaceted relationships among the virtues. This systematization provides resources for future prescriptive studies and potential collaboration among logicians, epistemologists, and practitioners across disciplines.
This paper advocates for a model-theoretic approach to comprehending the nature of scientific theories, drawing connections between philosophers' model-theoretic accounts and cognitive scientists' categorization concepts. The author proposes a more intricate structure for families of models, contrasting common assumptions. Using classical mechanics as an illustration, the paper suggests mapping families of models with "horizontal" graded structures, multiple hierarchical "vertical" structures, and local "radial" structures. These proposed structures have significant implications for the learning and application of scientific theories in real scientific practice.
The book covers various aspects of theory construction, including defining concepts, formulating hypotheses, designing experiments, collecting and analyzing data, and revising theories based on empirical findings. It also explores different theoretical frameworks and methodologies commonly used in cognitive science, such as computational modeling, neuroscience, psychology, linguistics, and philosophy of mind. Throughout the text, Hardcastle emphasizes the importance of interdisciplinary collaboration and critical thinking in theory development. She highlights the challenges and pitfalls that researchers may encounter during the process and provides practical strategies for overcoming them.
- [Automatic ontology construction from text: a review from shallow to deep learning trend (Al-aswadi et al., 2020)](https://www.researchgate.net/publication/337112076_Automatic_ontology_construction_from_text_a_review_from_shallow_to_deep_learning_trend)
The paper explores the field of automatic ontology construction from textual data on the web, driven by the need to promote the semantic web. Ontology learning (OL) from text is the process of extracting and representing knowledge in machine-readable form. Ontologies play a crucial role in enhancing knowledge representation on the semantic web, benefiting applications like information retrieval, extraction, and question answering. Manual ontology construction is time-consuming and costly, leading to the development of various automated approaches and systems. The paper reviews these approaches, systems, and challenges, highlighting the shift from shallow learning to deep learning techniques for future ontology construction enhancements. The introduction emphasizes the significance of ontologies in the semantic web and addresses the need for efficient and automated construction processes.
This article emphasizes the importance of combining reliable methods with rigorous theory in scientific research. It introduces the concept of Action Identification, suggesting psychological tension between focusing on methodological details and noticing broader connections. The paper proposes a technique called theory mapping to visually outline theories, providing specificity and synthesis. Theory mapping involves five elements and is illustrated using examples from moral judgment and cars. The technique is seen as a valuable resource for the scientific community, offering benefits such as specificity, preventing redundancies, increasing validity and reliability, and aiding in theory evaluation. The article concludes that while psychology has focused on methodological reliability, there is a need to improve the rigor of theory, and theory mapping serves as a useful tool for connecting ideas and building knowledge structures. The article also highlights theory maps available on www.theorymaps.org, showcasing various psychological phenomena mapped by experts.
Suppe delves into the fundamental concept of scientific theory within the philosophy of science. He offers a comprehensive analysis and clarification of the term "scientific theory" by exploring its various dimensions and characteristics. Through a thorough examination of historical and contemporary examples from different scientific disciplines, Suppe outlines the essential components that distinguish scientific theories from mere hypotheses or empirical generalizations. He emphasizes the role of scientific theories in organizing and explaining empirical data, predicting future observations, and guiding scientific inquiry. Furthermore, Suppe addresses common misconceptions and challenges associated with the understanding of scientific theories, including the distinction between theories and laws, the role of evidence and falsifiability, and the dynamics of theory change and revision.
McMullin explores the qualities that characterize an exemplary scientific theory. Drawing from the philosophy of science, McMullin argues that a good theory possesses several virtues that contribute to its explanatory power, predictive accuracy, and overall scientific merit. Through a detailed examination of historical and contemporary examples across various scientific disciplines, McMullin identifies key virtues such as coherence, simplicity, explanatory depth, empirical adequacy, and fruitfulness. He elucidates how these virtues interact and complement each other, shaping the development and evaluation of scientific theories. Moreover, McMullin discusses the role of values and pragmatic considerations in assessing the virtues of theories, highlighting the importance of epistemic and social factors in scientific inquiry.
Craver presents a comprehensive analysis of the conceptual frameworks that underlie scientific theories. Focusing on the structure and organization of scientific theories across various disciplines, Craver explores how theories are constructed, articulated, and refined to explain empirical phenomena. Through a combination of historical insights and contemporary case studies, he elucidates the diverse forms and components of scientific theories, including laws, models, concepts, and hypotheses. Craver also discusses the role of theoretical structures in guiding scientific inquiry, facilitating empirical research, and fostering interdisciplinary collaborations. Furthermore, he examines the dynamic nature of scientific theories, emphasizing their capacity for evolution and adaptation in response to new evidence and theoretical advances.
The paper advocates for an explicit conceptual framework in biology, proposing new overarching theories for cells, organisms, and genetics, inspired by the theory of evolution. This framework aims to accelerate scientific progress, reveal connections in biology, and offer insights into theory structures. The author suggests its application as an educational tool to transform biology teaching. The paper encourages a broader discussion within the biological community about the nature and structure of theories.
The author delves into the burgeoning field of theoretical neuroscience, examining its origins, methodologies, and contributions to our understanding of the brain. Abbott provides a comprehensive overview of theoretical approaches used to study neural systems, including computational modeling, mathematical analysis, and theoretical frameworks borrowed from physics and engineering. Through a synthesis of experimental findings and theoretical insights, he demonstrates how theoretical neuroscience has advanced our understanding of complex neural phenomena such as perception, learning, and memory. Additionally, Abbott explores the interdisciplinary nature of theoretical neuroscience, highlighting collaborations between neuroscientists, mathematicians, computer scientists, and physicists. By elucidating the role of theory in neuroscience, Abbott argues for the integration of theoretical and experimental approaches to address fundamental questions about brain function and dysfunction. Overall, "Theoretical Neuroscience Rising" offers a compelling narrative of the evolution of theoretical neuroscience and its pivotal role in shaping our understanding of the brain.
This article presents a systematic and practical approach to theory formation within the field of psychology. The authors introduce a comprehensive methodology that guides researchers through the process of constructing, refining, and evaluating theories to explain psychological phenomena. Drawing from philosophical and methodological insights, as well as empirical examples from psychology and related disciplines, the book offers clear guidelines for defining constructs, formulating hypotheses, designing empirical studies, and assessing the validity of theoretical propositions. By emphasizing the importance of rigorous theory construction, Borsboom et al. underscore the significance of theory-driven research in advancing scientific knowledge and understanding human behavior. Moreover, the authors address common challenges and misconceptions surrounding theory construction, providing researchers with practical tools and strategies to enhance the quality and coherence of their theoretical frameworks.
The authors argue for a fundamental shift in epistemological paradigms to foster progress in theory development across disciplines. Through a critical examination of prevailing epistemological assumptions and practices within academia, van Rooij and Baggio advocate for a more holistic and integrative approach to theory construction. They highlight the limitations of reductionist and positivist approaches in addressing the complexities of contemporary research questions, particularly in fields such as cognitive science, psychology, and social sciences. Drawing from insights in philosophy of science and complexity theory, the authors propose an epistemological framework that emphasizes the interconnectedness of phenomena, the role of context and emergence, and the integration of multiple levels of analysis. By embracing this epistemological sea change, van Rooij and Baggio argue that researchers can develop more robust and explanatory theories that capture the dynamic and multidimensional nature of reality.
Researchers display confirmation bias when they persevere by revising procedures until obtaining a theory-predicted result. This strategy produces findings that are overgeneralized in avoidable ways, and this in turn hinders successful applications. (The 40-year history of an attitude-change phenomenon, the sleeper effect, stands as a case in point.) Confirmation bias is an expectable product of theory-centered research strategies, including both the puzzle-solving activity of T. S. Kuhn's "normal science" and, more surprisingly, K. R. Popper's recommended method of falsification seeking. The alternative strategies of condition seeking (identifying limiting conditions for a known finding) and design (discovering conditions that can produce a previously unobtained result) are result-centered; they are directed at producing specified patterns of data rather than at the logically impossible goals of establishing either the truth or falsity of a theory.
The author explores strategies for generating innovative hypotheses in psychological research. Drawing upon insights from cognitive psychology and creative thinking, McGuire presents a collection of heuristic techniques aimed at stimulating novel ideas and hypotheses in the realm of psychology. Through a blend of theoretical discussion and practical examples, McGuire elucidates various heuristic approaches, including analogical reasoning, counterfactual thinking, and divergent thinking. He emphasizes the importance of flexibility, openness to unconventional ideas, and willingness to challenge established assumptions in the hypothesis generation process. Furthermore, McGuire discusses how these heuristic strategies can be integrated into research practices to inspire creativity and innovation, ultimately enriching the theoretical landscape of psychology.
The authors explore the intersection between rational and empirical epistemologies, emphasizing the need for integration and synergy between these two approaches within the discipline. Through a synthesis of philosophical analysis and empirical evidence, the paper elucidates the strengths and limitations of both rational and empirical methods in advancing theoretical frameworks in psychology. Furthermore, the authors propose conditions under which theoretical psychology can thrive, including a commitment to interdisciplinary collaboration, methodological pluralism, and the adoption of integrative epistemological frameworks. By fostering dialogue and cooperation between rationalist and empiricist perspectives, the paper advocates for a more robust and dynamic theoretical landscape in psychology, capable of addressing complex and multifaceted phenomena.
This paper brings together the relevant literature in several areas of psychology with respect to the interdisciplinary domain of psychological epistemology. It attempts to do so in a selective and interpretive manner. Five substantive research areas are examined with respect to their contribution toward the understanding of the philosophical problems of knowing: perception, thinking, intuition, symbolizing, and developmental psychology. The classical epistemological topics of realism, idealism, empiricism, rationalism, intuitionism, and others are examined in terms of the psychological contribution from the fields of perception, thinking, intuiting, symbolizing, and developmental studies of thinking and language acquisition. 13 epistemic issues are identified, and the empirical conclusions relevant to their eventual resolution are examined.
Introduces the idea of an epistemic triangle, with factual, theoretical, and conceptual investigations at its vertices, and argues that whereas scientific progress requires a balance among the 3 types of investigations, psychology's epistemic triangle is stretched disproportionately in the direction of factual investigations. Expressed by a variety of different problems, this unbalance may be created by a main operative theme—the obsession of psychology with a narrow and mechanical view of the scientific method and a misguided aversion to conceptual inquiries. Hence, to redress psychology's epistemic triangle, a broader and more realistic conception of method is needed and, in particular, conceptual investigations must be promoted. Using examples from different research domains, the authors describe the nature of conceptual investigations, relate them to theoretical investigations, and illustrate their purposes, forms, and limitations.
Navarro offers a critical examination of the role of mathematical psychology in the advancement of theory building within the field. Through a blend of theoretical analysis and empirical examples, Navarro argues that mathematical psychology plays a crucial role in fostering precision, coherence, and rigor in psychological theories. By leveraging mathematical tools and formal models, researchers can more effectively articulate and test theoretical propositions, leading to a deeper understanding of psychological phenomena. Navarro also discusses the challenges and opportunities associated with integrating mathematical approaches into psychological research, highlighting the importance of interdisciplinary collaboration and methodological innovation.
This article documents two facts that are provocative in juxtaposition. First: There is multidecade durability of theory controversies in psychology, demonstrated here in the subdisciplines of cognitive and social psychology. Second: There is a much greater frequency of Nobel science awards for contributions to method than for contributions to theory, shown here in an analysis of the last two decades of Nobel awards in physics, chemistry, and medicine. The available documentation of Nobel awards reveals two forms of method–theory synergy: (a) existing theories were often essential in enabling development of awarded methods, and (b) award-receiving methods often generated previously inconceivable data, which in turn inspired previously inconceivable theories. It is easy to find illustrations of these same synergies also in psychology. Perhaps greater recognition of the value of method in advancing theory can help to achieve resolutions of psychology’s persistent theory controversies.
- Theory construction in the psychopathology domain: A multiphase approach
Theory construction in the psychopathology domain is a multifaceted and complex endeavor that requires a systematic and rigorous approach. This paper proposes a multiphase methodology for theory construction in psychopathology, aiming to provide researchers with a structured framework to guide their theoretical endeavors. Drawing upon insights from various disciplines including psychology, psychiatry, and neuroscience, the proposed approach involves several key phases: (1) Conceptualization and Literature Review, (2) Hypothesis Generation, (3) Model Development, (4) Empirical Testing, and (5) Revision and Refinement.
- Psychology: a giant with feet of clay (Zagaria, Ando & Zennaro, 2020)
- On the role of theory and modeling in neuroscience (Levenstein, 2020)
- Theoretical Virtues in Science OUP Bibliographies (Schindler, 2020)
- Gender, politics, and the theoretical virtues (Longino, 1995)
- Theoretical virtues in eighteenth-century debates on animal cognition (Hein van den Berg, 2020)
The paper investigates how corpus linguistics methodologies can inform and enrich philosophical inquiries within the realm of science. Through a multidisciplinary approach, the paper delves into the application of corpus linguistic techniques to analyze scientific texts, including research articles, textbooks, and academic discourse. The author demonstrates how corpus linguistics can offer valuable insights into the language and communication practices of scientists, shedding light on the development, dissemination, and evolution of scientific knowledge. Moreover, the paper explores how corpus linguistic analyses can contribute to philosophical debates surrounding scientific realism, theory construction, and the nature of scientific explanation.
- American psychiatry in the new millennium: a critical appraisal (Scull, 2021)
- Digital Literature Analysis for Empirical Philosophy of Science (Lean, Rivelli & Pence, 2021)
Empirical philosophers of science aim to base their philosophical theories on observations of scientific practice. But since there is far too much science to observe it all, how can we form and test hypotheses about science that are sufficiently rigorous and broad in scope, while avoiding the pitfalls of bias and subjectivity in our methods? Part of the answer, we claim, lies in the computational tools of the digital humanities (DH), which allow us to analyze large volumes of scientific literature. Here we advocate for the use of these methods by addressing a number of large-scale, justificatory concerns—specifically, about the epistemic value of journal articles as evidence for what happens elsewhere in science, and about the ability of DH tools to extract this evidence. Far from ignoring the gap between scientific literature and the rest of scientific practice, effective use of DH tools requires critical reflection about these relationships.
- The Neuroscientification of Psychology: The Rising Prevalence of Neuroscientific Concepts in Psychology From 1965 to 2016
- Using text analysis to quantify the similarity and evolution of scientific disciplines (Dias, Gerlach, Scharloth & Altmann, 2018)
The authors used an information-theoretic measure of linguistic similarity to investigate the organization and evolution of scientific fields. An analysis of almost 20 million papers from the past three decades reveals that linguistic similarity is related to, but different from, expert- and citation-based classifications, leading to an improved view of the organization of science. A temporal analysis of the similarity of fields shows that some fields (e.g. computer science) are becoming increasingly central, but that on average the similarity between pairs of disciplines has not changed in the last decades. This suggests that tendencies of convergence (e.g. multi-disciplinarity) and divergence (e.g. specialization) of disciplines are in balance.
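The similarity measure is information-theoretic; as a toy illustration of the general idea (not the authors' code), one can compare two fields' word-frequency distributions with the Jensen-Shannon distance from scipy:

```python
# Toy illustration (not the authors' pipeline): compare the word-frequency
# distributions of two "fields" with the Jensen-Shannon distance.
from collections import Counter
import numpy as np
from scipy.spatial.distance import jensenshannon

field_a = "neural network models of language learning and memory".split()
field_b = "statistical models of language change in historical corpora".split()

vocab = sorted(set(field_a) | set(field_b))
counts_a = Counter(field_a)
counts_b = Counter(field_b)

# Convert raw counts to probability distributions over the shared vocabulary.
p = np.array([counts_a[w] for w in vocab], dtype=float)
q = np.array([counts_b[w] for w in vocab], dtype=float)
p, q = p / p.sum(), q / q.sum()

print(jensenshannon(p, q))  # 0 = identical vocabularies, higher = more dissimilar
```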
- Phylomemetic Patterns in Science Evolution—the Rise and Fall of Scientific Fields (Chavalarias, 2013)
- Adversarial alignment enables competing models to engage in cooperative theory building toward cumulative science (Ellemers et al., 2020)
- Theory choice, non-epistemic values, and machine learning (Dotan, 2020)
The paper investigates how non-epistemic values, such as ethical considerations, societal impacts, and economic interests, influence the selection and evaluation of theoretical frameworks in the context of machine learning research and practice. Through a philosophical analysis grounded in the philosophy of science and ethics, Dotan examines the implications of value-laden decision-making processes for the development and application of machine learning models. The author argues that while traditional epistemic criteria play a crucial role in theory choice, non-epistemic values often shape researchers' preferences, priorities, and interpretations of empirical evidence. Moreover, the paper discusses the ethical challenges and dilemmas arising from the integration of non-epistemic values into machine learning processes, including concerns related to bias, fairness, accountability, and transparency. By addressing these issues, Dotan's work contributes to a deeper understanding of the complex interplay between epistemic and non-epistemic factors in theory choice within the domain of machine learning, ultimately informing ethical decision-making and responsible innovation in artificial intelligence technologies.
- Studying grant decision-making: a linguistic analysis of review reports
The study employs corpus linguistics techniques to analyze a large dataset of review reports from grant applications, exploring patterns of language use and rhetorical strategies employed by reviewers. Through quantitative and qualitative analyses, the authors uncover recurring themes, evaluative criteria, and linguistic markers associated with successful and unsuccessful grant proposals. Moreover, the study investigates how linguistic features such as tone, clarity, and argumentation style influence reviewers' assessments and decision-making processes. The findings shed light on the complex dynamics of grant review processes, highlighting the role of language in shaping evaluative judgments and funding outcomes.
This commentary addresses the course-of-experience method in the context of calls for improved theorising in psychological science, and in particular the prospect of applying distinctive means of analysis to examine patterns and variance both between people and across contexts. Psychology can benefit by the development of both theories of principle (formal accounts of the structure of phenomena) and constructive theories (formal accounts of the mechanics of phenomena). The course-of-experience method can provide a useful step toward the development of both. Resonances with other work grounded in naturalistic observation identify potential questions that remain as to how the course-of-experience method can address questions about the relationship between individual and collective aspects of experience, but the technique represents a significant boon to the future development of valid cognitive science.
Research in experimental philosophy has increasingly been turning to corpus methods to produce evidence for empirical claims, as they open up new possibilities for testing linguistic claims or studying concepts across time and culture. The present article reviews the quasi-experimental studies that have been done using textual data from corpora in philosophy, with an eye for the modeling and experimental design that enable statistical inference. I find that most studies forego comparisons that could control for confounds, and that only a little less than half employ statistical testing methods to control for chance results. Furthermore, at least some researchers make modeling decisions that either do not take into account the nature of corpora and of the word-concept relationship, or undermine the experiment's capacity to answer research questions. I suggest that corpus methods could both provide more powerful evidence and gain more mainstream acceptance by improving their modeling practices.
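As a toy illustration of the kind of statistical control the review calls for (not drawn from the article itself), a chi-square test can check whether a target word's frequency differs between two corpora more than chance alone would explain; the counts below are made up.

```python
# Toy illustration (not from the article): test whether a target word is
# over-represented in one corpus relative to another, instead of reporting
# raw frequency differences.
from scipy.stats import chi2_contingency

# Rows: corpus A, corpus B; columns: occurrences of the target word vs. all other tokens.
table = [
    [120, 49_880],  # corpus A: 120 hits out of 50,000 tokens
    [80, 59_920],   # corpus B: 80 hits out of 60,000 tokens
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.4f}")
```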
The paper delves into the status of cognitive values within scientific inquiry, particularly concerning their interaction with theories, methods, and values. It examines how these cognitive values, such as simplicity or generality, contribute to judgments about the worth of scientific models, theories, and hypotheses. Additionally, it explores Larry Laudan's perspective from "Science and Values," which suggests that cognitive values are part of a broader system that includes theories, methods, and values, with their interrelations being subject to rational critique. The central focus lies in questioning the distinction between cognitive and non-cognitive aspects within this framework.
The paper challenges the prevailing notion that experiments are the primary means for establishing strong causal inferences in behavioral sciences. The authors argue against the overemphasis on experiments, highlighting their limitations and advocating for the incorporation of other methods such as quasi-experimental and nonexperimental approaches. They discuss the weaknesses of experiments, including issues related to external, construct, statistical-conclusion, and internal validity, as well as problems with replication and conceptual oversimplification. Furthermore, they emphasize the importance of alternative methods in elucidating causal mechanisms and understanding complex interactions in dynamic systems. While acknowledging the value of experiments, the authors stress the need for a balanced approach that integrates various research methods to achieve scientific progress effectively.
The article examines a number of prominent trends in the conduct of psychological research and considers how they may limit progress in our field. Failure to appreciate important differences in temperament among researchers, as well as differences in the particular talents researchers bring to their work, has prevented the development in psychology of a vigorous tradition of fruitful theoretical inquiry. Misplaced emphasis on quantitative “productivity,” a problem for all disciplines, is shown to have particularly unfortunate results in psychology. Problems associated with the distorting effects of seeking grant support are shown to interact with the first two difficulties. Finally, the distorting effects of certain kinds of experimental studies are discussed, together with their implications for progress in this field.
The paper explores the role of theoretical virtues in theory acceptance and belief within both scientific and philosophical contexts. It presents findings from a quantitative study involving scientists from natural and social sciences, comparing their perspectives on theoretical virtues with those of philosophers. Surprising results include the existence of preference orders regarding theoretical virtues among all three groups, indicating a more determinate process of theory choice than previously thought. Additionally, similarities in preference orders are observed across the groups, suggesting a commonality in their views. Notably, social scientists emphasize simplicity as an epistemic virtue, contrasting with philosophers' perspectives. Furthermore, all groups prefer syntactic parsimony over ontological parsimony.
The paper examines overlay journals, which publish articles on open access repositories starting as preprints before peer review. It identifies 34 active overlay journals and notes their focus on natural sciences, mathematics, and computer sciences. Run by academic groups, these journals often rank well in traditional citation metrics and don't charge author fees. The study suggests the wider impact of overlay journals depends on the growth of open access preprint repositories and researchers' willingness to publish in them.
The paper investigates metaphilosophical questions regarding the nature and methods of philosophy compared to science. Drawing on the epistemology of logic literature, the study empirically examines whether philosophy is exceptional and if its methods are continuous with those of science. Specifically, the authors test metaphilosophical hypotheses such as deductivism, inductivism, and abductivism using a large corpus of philosophical texts from the JSTOR database. By analyzing argument types (deductive, inductive, and abductive), the study finds that while deductive arguments are prevalent in philosophical texts, there is a gradual increase in the use of non-deductive (inductive and abductive) arguments in academic philosophy over time.
- The shifting prevalence of conflict in psychoanalytic literature: A brief report of a corpus-based text analysis
- See further upon the giants: Quantifying intellectual lineage in science
The paper explores the concept of "giants" in scientific research, inspired by Newton's notion of standing on the shoulders of giants. It aims to identify the significant prior works cited by a discovery and determine which one can be considered its "giant." Developing a discipline-independent method, the study finds that most papers build upon prior works, with a small percentage being identified as giants. A new measure called the "giant index" is introduced, showing that papers with high citations are more likely to be giants and predicting a paper's future impact and likelihood of winning prizes. Giants can emerge from both small and large teams and are characterized as either highly disruptive or highly developmental. Papers without giants tend to perform poorly, but those that later become giants for other papers are often highly disruptive breakthroughs.
This open-access book focuses on three core themes central to Leydesdorff's research: (1) the dynamics of science, technology, and innovation; (2) the operationalization of these concepts through scientometrics; and (3) the elaboration of university-industry-government relations, known as the Triple Helix model. The study explores the interconnections among these themes, utilizing Luhmann's social-systems theory and Shannon's information theory. It demonstrates how synergy can enhance an innovation system by introducing new options as redundancy, emphasizing the importance of developing new options for innovation rather than relying solely on past performance. The capacity to anticipate future states enhances the anticipatory nature of a knowledge-based system. The study introduces a Triple-Helix synergy indicator to measure the trade-off between future and historical developments, exemplified through analysis of the Italian national and regional systems of innovation.
The paper explores the significance of tool development in scientific change, an aspect often overshadowed by discussions of theoretical advancements. It emphasizes how scientists utilize tools in exploratory experiments to generate new concepts. Through analysis of two cases in neuroscience, it demonstrates how tool development and concept formation are interconnected in instances of tool-driven change. The paper proposes common normative principles that determine the success of exploratory concept formation and tool development in initiating scientific change.
The book explores what constitutes a good scientist or engineer and how the use of new technologies and work in research labs influence our thinking and behavior. It delves into the ethical dilemmas posed by modern scientific research and technology and suggests that discussions of virtue from ancient philosophical and religious traditions can offer valuable insights. By gathering perspectives from various disciplines, the volume demonstrates how concepts of virtue can enhance our understanding, construction, and utilization of modern science and technology.
The paper discusses the contentious issue of whether ambiguity, specifically polysemy (multiple meanings for words), is productive in data science. It challenges the common assumption that polysemy undermines reasoning and communication, presenting arguments from historians, philosophers, and social scientists who suggest its generative value. Recent quantitative findings from linguistics also support the idea that polysemy can improve human communication efficiency. The paper introduces a new conceptual typology synthesizing prior research on polysemy's aims, norms, and circumstances, proposing a contextual pluralist perspective on its value in scientific research practices. It also suggests that investigating historical patterns of partial synonyms usage could offer valuable insights into these issues.
The paper discusses the pervasive issue of conceptual ambiguity in psychology and argues for the necessity of better conceptualization to advance the field as a science. It highlights how conceptual unclarity permeates various aspects of psychological research, from everyday concepts to statistical measures. Despite its importance, conceptual clarification plays a marginal role in psychological research. The paper emphasizes that concepts are fundamental to theories, methods, and data, and lack of clarity in concepts hampers scientific progress. It proposes strategies to improve conceptual clarity in psychology and illustrates the consequences of ambiguity using examples from research on friendship, psychopathological symptoms, and centrality.
The paper delves into the role of epistemic virtue in determining the success or failure of science as a predominantly epistemic social institution. It examines structural explanations commonly used to account for the success of science while ruling out attributions to individual scientists' virtues. Using the credibility crisis in the social and behavioral sciences as a case study of collective epistemic vice, the paper challenges the notion that individual virtue is neither necessary nor sufficient for scientific success. Instead, it argues that the presence of a significant number of epistemically virtuous scientists is crucial for collective epistemic success in scientific communities, despite divergent motivations and behaviors that may also serve collective scientific goals.
The paper challenges the common assumption that scientific induction, where researchers generalize from study samples to larger populations, is a voluntary cognitive process. Instead, it proposes a novel account suggesting that scientific induction involves a default generalization bias. This bias operates automatically and often leads researchers to unintentionally generalize their findings without sufficient evidence, resulting in overgeneralized conclusions. The paper integrates various findings from cognitive sciences to support this account and argues that addressing the generalization bias is essential to tackle the replication crisis in the sciences. It calls for cognitive debiasing strategies to supplement existing interventions aimed at improving scientific practices.
The paper explores how collective phenomena like language and legal systems emerge from interactions between individual human minds, using a cognitive science perspective. It suggests that these phenomena can be seen as types of distributed computation, where groups of people collectively solve complex problems beyond any individual's capacity. It discusses various aspects of distributed computation in social contexts and proposes that understanding these processes computationally can offer insights into social complexity and cultural evolution. The paper concludes by highlighting the potential for a new revolution in cognitive science, emphasizing the role of individuals in contributing to collective computations.
The book provides a comprehensive overview of the emerging field known as the "science of science," which utilizes big data to uncover patterns governing scientific careers and the scientific process itself. It delves into various aspects such as the roots of scientific impact, productivity, creativity, effective collaborations, and the impact of failure and success in scientific careers. By relying on data-driven insights, the book offers actionable recommendations for individuals seeking to advance their careers and for decision-makers aiming to enhance the role of science in society. Accessible to scientists, graduate students, policymakers, and administrators, the book offers detailed explanations and anecdotes to make complex research easily understandable.
The book is a practical guide to the methods used in digital humanities, offering step-by-step instructions for studying, interpreting, and presenting cultural material and practices. It covers both using computing for new insights into culture and applying humanities methods to understand new technologies. Each chapter provides guidance on methodologies, ethical considerations, practical procedures, and effective presentation of work. It helps students develop practical skills and understandings of digital tools, fostering collaboration and contributions to scholarly and public discourse.
The paper proposes a method for determining if a theoretically predicted effect matches an observed effect, using a simple similarity measure. It discusses the application of this measure in various research designs and estimates the necessary sample size for a given observed effect using computer simulations. The main example involves applying this measure to recent meta-analytical research on precognition, indicating that the evidential basis is too weak for a predicted precognition effect. Additional examples include applying the measure to experimental data from dissonance theory, crowdsourcing hypothesis tests, and meta-analytical data on personality traits and life outcomes correlation.
The paper introduces a methodology to quantify interdisciplinary aspects of research in cognitive science. It proposes models for text similarity analysis using the Doc2Vec method to understand the relationship between publications and specific research fields. The findings suggest that cognitive science collaborates closely with disciplines like psychology, philosophy, linguistics, and computer science, but has limited engagement with anthropology and neuroscience. Overall, the study highlights the interdisciplinary nature of cognitive science over the past few decades.
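For readers who want a concrete sense of the Doc2Vec-based similarity analysis mentioned above, here is a minimal, hypothetical sketch using gensim; the field labels, toy texts, and parameters are placeholders rather than the paper's actual data or pipeline.

```python
# Minimal sketch (not the paper's actual pipeline): train Doc2Vec on a few
# toy "field" documents and compare a new abstract to each field.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.utils import simple_preprocess

# Placeholder corpora: in practice these would be large sets of abstracts per field.
field_texts = {
    "psychology": "memory attention perception experiment participants behavior",
    "philosophy": "epistemology justification argument concept normativity theory",
    "computer_science": "algorithm model training data neural network computation",
}

tagged = [
    TaggedDocument(words=simple_preprocess(text), tags=[field])
    for field, text in field_texts.items()
]

model = Doc2Vec(tagged, vector_size=50, min_count=1, epochs=100, seed=42)

# A new (hypothetical) cognitive-science abstract to locate among the fields.
abstract = "a computational model of attention and memory tested in an experiment"
vector = model.infer_vector(simple_preprocess(abstract))

# Cosine similarity of the inferred vector to each field's document vector.
for field, similarity in model.dv.most_similar([vector], topn=len(field_texts)):
    print(field, round(similarity, 3))
```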
This paper utilizes corpus analytical methods on large English corpora to investigate the lemmas "justify_v" and "justified_j." The author raises a challenge to the folk justification approach, suggesting that there is not a widely used ordinary notion of justification that attaches to beliefs. The findings pose a challenge to the idea that "justify" is commonly used to talk about the justification of beliefs. The author concludes by presenting possible solutions to this challenge and discussing their feasibility. The paper highlights the potential of corpus analysis to raise philosophical questions and suggests that it can also be a tool to resolve them.
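As a toy illustration of this style of corpus query (not the paper's actual corpora, lemma tags, or coding scheme), one can inspect what tends to follow forms of "justify" in a small reference corpus:

```python
# Toy sketch of a corpus query (the paper uses much larger corpora and
# lemma-level annotation such as justify_v): look at what words follow "justify".
import nltk
from nltk.corpus import brown

nltk.download("brown", quiet=True)

words = [w.lower() for w in brown.words()]
targets = {"justify", "justifies", "justified", "justifying"}

# Collect the word immediately following each occurrence of a target form.
following = nltk.FreqDist(
    words[i + 1] for i, w in enumerate(words[:-1]) if w in targets
)

# Frequent right-hand collocates; how often "belief(s)" appears gives a crude
# sense of whether "justify" attaches to beliefs in ordinary usage.
print(following.most_common(15))
print("belief(s):", following["belief"] + following["beliefs"])
```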
The book argues for the epistemic superiority of simpler theories in philosophy. Bradley contends that simplicity serves as a crucial criterion for theory evaluation, promoting greater explanatory power, predictive accuracy, and overall epistemic economy. Drawing from philosophical traditions and empirical evidence, the author examines various formulations of simplicity and their implications for theory choice in philosophy. Through a critical analysis of competing theories and conceptual frameworks, Bradley demonstrates how simplicity enhances theoretical coherence, conciseness, and explanatory scope. Moreover, the book explores the implications of preferring simpler theories for philosophical methodology, foundational debates, and the pursuit of truth. By advocating for the epistemic virtues of simplicity, Bradley's work contributes to a deeper understanding of theory evaluation in philosophy and offers insights into the principles guiding rational inquiry and knowledge acquisition.
Scientists often diverge widely when choosing between research programs. This can seem to be rooted in disagreements about which of several theories, competing to address shared questions or phenomena, is currently the most epistemically or explanatorily valuable—i.e. most successful. But many such cases are actually more directly rooted in differing judgments of pursuit-worthiness, concerning which theory will be best down the line, or which addresses the most significant data or questions. Using case studies from 16th-century astronomy and 20th-century geology and biology, I argue that divergent theory choice is thus often driven by considerations of scientific process, even where direct epistemic or explanatory evaluation of its final products appears more relevant. Broadly following Kuhn's analysis of theoretical virtues, I suggest that widely shared criteria for pursuit-worthiness function as imprecise, mutually-conflicting values. However, even Kuhn and others sensitive to pragmatic dimensions of theory ‘acceptance’, including the virtue of fruitfulness, still commonly understate the role of pursuit-worthiness—especially by exaggerating the impact of more present-oriented virtues, or failing to stress how ‘competing’ theories excel at addressing different questions or data. This framework clarifies the nature of the choice and competition involved in theory choice, and the role of alternative theoretical virtues.
Why do bad methods persist in some academic disciplines, even when they have been widely rejected in others? What factors allow good methodological advances to spread across disciplines? In this paper, the authors investigate key features determining the success and failure of methodological spread between the sciences. They introduce a formal model that incorporates factors such as methodological competence and reviewers' bias toward their own methods, and show how these self-preferential biases can protect poor methodology within scientific communities, while a lack of reviewer competence can contribute to failures to adopt better methods.
The paper examines how digital humanists bridge the gap between abstract concepts and concrete textual data through systematic operationalization processes. Through a synthesis of theoretical frameworks and practical examples, the authors elucidate the iterative nature of operationalization, encompassing activities such as concept definition, data collection, coding, annotation, and analysis. Moreover, the paper discusses the challenges and opportunities inherent in operationalizing concepts within digital humanities projects, including issues related to interdisciplinarity, data quality, and methodological transparency. By highlighting the centrality of operationalization as a core activity of digital humanities research, this paper contributes to a deeper understanding of the theoretical and methodological foundations of the field and offers insights into best practices for conducting empirical investigations in digital humanities.
The paper investigates the landscape of review articles, delineating various types and elucidating their distinct information retrieval needs. Drawing from interdisciplinary perspectives, the authors delve into the methodologies, purposes, and audiences of different review types. Through a systematic examination, the article illuminates the diverse information retrieval requirements essential for accessing and utilizing review articles effectively. By providing insights into the intricacies of the review family, this study aims to enhance researchers' ability to navigate scholarly literature and optimize their literature search strategies.
The paper explores the cognitive mechanisms underlying explanatory preferences and their implications for decision-making and belief formation. Through a synthesis of empirical research and theoretical frameworks from psychology and cognitive science, Lombrozo elucidates the role of explanatory coherence, simplicity, and causal reasoning in shaping individuals' interpretations of evidence and the construction of mental models. Furthermore, the paper discusses the implications of these findings for educational practices, scientific communication, and public understanding of science.
Through a multidisciplinary approach drawing from psychology, cognitive science, and neuroscience, van Rooij investigates how distractors influence the formation, representation, and retrieval of information in mental models. The paper examines various types of distractors, including irrelevant information, misleading cues, and competing hypotheses, and discusses their effects on attention, memory, and decision-making. Additionally, van Rooij explores theoretical frameworks and empirical findings related to distractor processing, highlighting their relevance for understanding cognitive mechanisms and developing interventions for cognitive enhancement.
Scientists have to decide which of several experiments to carry out. The authors examine the epistemic viability of experiment-choice procedures proposed by philosophers of science or actually used by scientists. Their multi-agent model jointly formalizes three main facets of the scientific method: active experimentation, theorizing, and social learning. They find that agents who select new experiments at random develop the most accurate theories of their world, whereas agents who set out to verify theories, refute theories, or settle theoretical disputes achieve only an illusion of epistemic success: they construct compelling narratives for the data they gather, but largely fabricate the reality they set out to discover.
Drawing on philosophical and historical perspectives, Koyré critically analyzes the underlying assumptions and practices of measurement. Through a thorough analysis of historical case studies and theoretical reflections, the paper clarifies the intricacies of the measurement process, such as the interaction between theory and observation, the creation of measurement standards, and the epistemological ramifications of measurement uncertainty. Furthermore, Koyré considers the wider significance of measurement as a fundamental idea in the growth of scientific understanding and technological progress.
The article discusses the concept of pseudo-empirical research in psychology, as outlined by Smedslund (1991), which involves empirically testing what can be deduced a priori from everyday psychological terminology. It explores how this perspective aligns with aspects of a theoretical model of narrative form, highlighting the role of "trouble" as a necessary premise for research. However, it notes a paradox: while trouble is essential for narrative and research, it is not feasible in the context of general questions about everyday psychological concepts. The article resolves this paradox by examining methodological and discursive characteristics of research, such as the reification of psychological terminology and statistical analysis, which can obscure the absence of trouble or create the illusion of its presence.
The paper explores the role of values in science, addressing questions regarding their influence, integration into scientific practices, responsible management, and actionable steps for promoting their responsible roles. It critiques the "value-free ideal" in science, advocating for its rejection while also highlighting the need to discern appropriate from inappropriate value influences. Ultimately, it proposes an approach to managing values in science through the establishment and implementation of norms for research practices and institutions.
The paper challenges the prevailing taboo against explicit causal inference in nonexperimental psychology, arguing that it hampers study design, data analysis, and the advancement of knowledge about causal mechanisms. It suggests that psychologists should openly discuss causal assumptions and effects to take advantage of methodological advances in causal reasoning. The authors contend that while correlation does not imply causation, the reluctance to address causal questions implicitly hinders progress in the field and limits its relevance for policymaking. They propose ways for nonexperimental psychologists to incorporate causality into their research more effectively.
The paper systematically reviews critiques of positive psychology, aiming to understand the challenges facing its third wave of research. Analyzing 32 records, it identifies 117 unique criticisms, categorized into 21 groups and six overarching themes. These encompass concerns about theoretical depth, methodological issues, accusations of pseudoscience, lack of novelty, promotion of a harmful ideology, and perceptions of capitalism. The paper reflects on these findings and highlights the opportunities they offer.
In light of methodological and technical developments, Alger addresses issues concerning the validity, reliability, and usefulness of hypotheses in scientific investigation. Combining empirical research with theoretical insights, the study examines techniques for boosting the robustness and credibility of scientific theories, such as replication studies, open science practices, and rigorous statistical approaches. Alger also discusses the opportunities that big data techniques offer for developing, evaluating, and refining hypotheses.
Drawing on perspectives from philosophy of science and psychology, the authors contend that the unrelenting hunt for new discoveries has fragmented research efforts and led to the neglect of theoretical development and coherence. Through a critical analysis of current practices and trends in psychology research, the paper draws attention to the negative consequences of novelty-driven approaches for theoretical integration, scientific progress, and replicability. Moreover, Burghardt and Bodansky suggest practical strategies for encouraging theory-driven research, such as emphasizing cumulative knowledge, collaborating across disciplines, and adopting open and rigorous scientific practices.
Smaldino contends that although improvements in research techniques are necessary for empirical investigation, they are insufficient to make up for shortcomings in theoretical frameworks. Using examples from a variety of scientific disciplines, the paper shows how a focus on methodological innovation frequently comes at the expense of theoretical depth and coherence. Through a critical examination of academic processes and incentives, Smaldino emphasizes the importance of prioritizing theoretical development and refinement in order to advance scientific knowledge. The paper also suggests ways to promote a theory-driven research culture, such as mentoring, multidisciplinary cooperation, and incentives for theoretical contributions.
The paper examines the trends in scholarly citation patterns over time, particularly focusing on the concentration of citations on a narrower range of top papers. It discusses the potential implications of this trend on the circulation of ideas in the sciences. The study finds that while a larger proportion of literature is cited at least a few times, citations are increasingly concentrated at the top of the citation distribution. It suggests that a paper's future importance is increasingly dependent on its past citation performance, indicating a mechanism of cumulative advantage. The evidence presented in the paper indicates that the growing heterogeneity of citation impact restricts the mobility of research articles that do not gain attention early on. The study attributes these trends to the advancement of information technologies for disseminating papers.
The Open Science [OS] movement aims to foster the wide dissemination, scrutiny and re-use of research components for the good of science and society. This Element examines the role played by OS principles and practices within contemporary research and how this relates to the epistemology of science. After reviewing some of the concerns that have prompted calls for more openness, it highlights how the interpretation of openness as the sharing of resources, so often encountered in OS initiatives and policies, may have the unwanted effect of constraining epistemic diversity and worsening epistemic injustice, resulting in unreliable and unethical scientific knowledge. By contrast, this Element proposes to frame openness as the effort to establish judicious connections among systems of practice, predicated on a process-oriented view of research as a tool for effective and responsible agency.
One of the key terms in sociology is "theory." Sociologists frequently ask that articles, chapters, and monographs be "theoretical," "develop theory," or "make a theoretical contribution." However, as shown by Gabriel Abend's 2008 paper "The Meaning of 'Theory,'" sociologists rarely agree on what they mean when they discuss theory. Abend identifies seven distinct interpretations that sociologists typically ascribe to the word "theory" and contends that these substantively diverse meanings cannot be effectively captured by a single definition. In contrast to Abend, we put forth and defend a simple yet adaptable theory of theory that encompasses the key elements shared by all of the sociologists' diverse uses of the term.
The pursuit-worthiness of philosophical concepts has evolved over time, but philosophical technique and practice have not kept pace with this shift. A philosophical endeavor is only as worthy as the concepts and goals it pursues, and as the means by which it is pursued. In this work, we clarify the ways in which empirical approaches advance philosophy of science, especially in promoting the application of qualitative techniques to understand the normative and social dimensions of scientific inquiry. We first place qualitative approaches within the context of empirical philosophy of science and then discuss how these traditionally sociological tools might be adapted to provide empirical guidance on philosophical questions.
For many years, a number of unsubstantiated, erroneous, or misunderstood ideas have been widely disseminated in scholarly publications, eventually taking the form of scientific myths. How can these false beliefs endure and spread in the hostile milieu of scholarly criticism? Analyzing 613 citing articles, the authors show that an "affirmative citation bias" skews the reception of three publications that expose such myths: the great majority of articles citing a myth-critical publication nevertheless support the idea it criticizes. Forty citing articles took a negative stance toward the myth, 105 were neutral, and 468 agreed with it. The longer and more widely false beliefs circulate, the harder they become to refute, which may further entrench the myths.
The paper discusses the rise of the "reproducibility crisis" in science policy and contrasts it with past scientific scandals. It attributes the current crisis's prominence to the efforts of a group of scientific activists called "metascientists." Metascience, as a social movement, focuses on using quantification and experimentation to diagnose research practice problems and enhance efficiency. Comprised of data scientists, methodologists, and open science advocates, metascience has successfully influenced funding, media coverage, and policies in scientific institutions. The paper employs a social movement framework to analyze the spread and impact of the reproducibility crisis narrative and its implications for science institutions.
In philosophy of science, the centrality of epistemic terms like theory, explanation, model, or mechanism is rarely questioned. But what practical role do they play in science? In this empirical philosophy of science project, the authors use text-mining techniques to examine how 61 epistemic concepts are used in a corpus of full-text articles from the biological and biomedical sciences (N = 73,771). By dividing the corpus into sub-disciplinary clusters, they also investigate the impact of the disciplinary setting. The findings show that these notions form complex semantic networks in scientific discourse which, at least in some scientific domains, deviate from our intuitions.
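A minimal sketch of the general idea of mapping how epistemic concepts co-occur in article text; the concept list and "articles" below are invented placeholders, not the authors' corpus or method.

```python
# Minimal sketch: build a simple co-occurrence network of epistemic terms
# across a handful of placeholder "articles" (not the authors' corpus of
# 73,771 full texts, and only a few of the 61 concepts they track).
from itertools import combinations
from collections import Counter

concepts = ["theory", "model", "explanation", "mechanism", "hypothesis"]

articles = [
    "the model explains the mechanism underlying the observed effect",
    "this theory yields a testable hypothesis about the mechanism",
    "we propose an explanation based on a computational model",
]

edges = Counter()
for text in articles:
    present = {c for c in concepts if c in text.split()}
    # Count every pair of concepts that co-occur within the same article.
    for pair in combinations(sorted(present), 2):
        edges[pair] += 1

# Edge weights of the resulting concept network (higher = co-occur more often).
for (a, b), weight in edges.most_common():
    print(f"{a} -- {b}: {weight}")
```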
How can the explanatory adequacy of theories be verified against data? Large-scale replications have lately entered the scene, but theories are still most commonly assessed through single studies and non-systematic narrative reviews. According to the authors, none of these methods adheres to the principles of cumulative science. They propose instead Community-Augmented Meta-Analyses (CAMAs), which, like meta-analyses and systematic reviews, are built from all available data; like meta-analyses but not systematic reviews, can rely on sound statistical practices to model methodological effects; and, unlike any other approach, are broad-scoped, cumulative, and open.
The paper discusses the definition of social science and explores the distinction between two types of theories within the field. It examines how social science can be understood both narrowly, focusing on the structure of societies or cultures, and broadly, encompassing the study of complex human behavior influenced by interpersonal interactions. Additionally, it provides an example of a deductive-nomothetic theory in social science, specifically the theory of the evolution of the Indo-European languages.
The paper critiques empiricist psychology's reliance on quantification as a means to achieve scientific credibility. It argues that this approach neglects subjective phenomena inherent in the study of the mind and contradicts fundamental scientific principles. Specifically, it suggests that psychological theories often lack the capacity for nuanced quantitative prediction beyond simplistic binary outcomes, hindering the ability to discern between competing theories and infer underlying mental mechanisms from experimental data.
The paper explores the ambiguity surrounding the term "theory" in sociology and proposes a minimal and versatile theory of theory. It aims to foster discussion and reflexivity within the field by inviting sociologists of diverse perspectives to engage in dialogue. Using fictional personas of "Gunns" and "Marits," representing different theoretical preferences, the paper encourages collaboration and mutual understanding among sociologists. By establishing a common understanding of theories as sets of assumptions about phenomena, it seeks to advance the field collectively.
The paper proposes a solution to the challenge of distinguishing between "good" and "bad" scholarly journals by applying Imre Lakatos's methodology of scientific research programmes (MSRP). It reviews previous attempts at appraising journal quality and argues for a more nuanced approach that considers the historical evolution of publication practices. The author introduces novel tools, such as the mistake index and scite index, to operationalize aspects of the MSRP. Additionally, the paper advocates for the use of qualitative methods in philosophy of science to analyze scientists' reasoning and contribute to social epistemology. Overall, it calls for expanding the methodology in philosophy of science to include qualitative methods, citing their potential benefits in understanding interpersonal and collective reasoning processes in science.
The paper investigates the extent of dissonance between widely espoused norms of behavior in scientific research and scientists' perceptions of their own and others' behavior. Survey responses from 3,247 mid- and early-career scientists funded by the U.S. National Institutes of Health were analyzed. Significant normative dissonance was found, especially between espoused ideals and respondents' perceptions of other scientists' behavior. Respondents generally viewed other scientists' behavior as more counternormative than normative. The perceived cooperativeness or competitiveness of scientists' fields was associated with their normative perspectives, with more competitive fields exhibiting more counternormative behavior. The study highlights the persistent stress caused by high levels of normative dissonance in science.
The paper provides an overview of how models are utilized to understand scientific practice. It discusses various modeling approaches used by researchers to study different aspects of science, highlighting both straightforward insights and more nuanced observations. The paper argues that while models are valuable tools for comprehending scientific processes, their effectiveness is enhanced when combined with empirical methods and other forms of theorizing.
The paper explores the phenomenon of overpromising in scientific discourse, where unrealistic expectations are raised to gain trust and funding. Drawing on signaling theory, philosophy of promising, and science communication research, the paper presents a conceptualization of overpromising and emphasizes the importance of considering the knowledge context in promise-making. The study suggests that further research is needed to explore broader dimensions and motivations for overpromising, including the promiser's identity, normative dimensions of the promise, and specific contexts. Additionally, understanding why overpromising persists in certain contexts, despite awareness from both promiser and promisee, is highlighted as an area for investigation.
The paper proposes a novel method called TextMatch for evaluating the significance of scientific papers, addressing the limitations of traditional citation counts. TextMatch utilizes high-dimensional text embeddings generated by large language models (LLMs) to encode papers, extracts similar samples using cosine similarity, and synthesizes a counterfactual sample based on the weighted average of similar papers. The resulting metric, called CausalCite, offers a causal formulation of paper citations. The effectiveness of CausalCite is demonstrated through its high correlation with paper impact, as assessed by scientific experts, and its stability across various sub-fields of AI. The study provides insights for future researchers to leverage this metric for a more comprehensive understanding of a paper's quality. Code and data are available for further exploration.
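The following is a rough, hypothetical sketch of the core matching idea (embedding-based similarity followed by a similarity-weighted counterfactual), not the authors' actual implementation; the embeddings and citation counts are randomly generated placeholders.

```python
# Rough sketch of the matching step behind a CausalCite-style metric:
# find a paper's most textually similar neighbours via cosine similarity,
# then compare its citations to a similarity-weighted counterfactual.
# Embeddings and citation counts below are made-up placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these came from an LLM encoder: one embedding per candidate paper.
candidate_embeddings = rng.normal(size=(1000, 384))
candidate_citations = rng.poisson(lam=20, size=1000)

target_embedding = rng.normal(size=384)
target_citations = 75

def cosine_sim(matrix, vector):
    # Cosine similarity of each row in `matrix` against `vector`.
    matrix_norm = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    vector_norm = vector / np.linalg.norm(vector)
    return matrix_norm @ vector_norm

similarities = cosine_sim(candidate_embeddings, target_embedding)

# Keep the k most similar papers and weight them by (softmaxed) similarity.
k = 50
top = np.argsort(similarities)[-k:]
weights = np.exp(similarities[top])
weights /= weights.sum()

# Counterfactual citation count: weighted average over the matched papers.
counterfactual_citations = float(weights @ candidate_citations[top])

# A CausalCite-like score compares actual to counterfactual citations.
print(target_citations / counterfactual_citations)
```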
Replication Crisis in Psychology
- Stepping in the Same River Twice: Replication in Biological Research
- A discipline-wide investigation of the replicability of Psychology papers over the past two decades
- No Evidence for a Replicability Crisis in Psychological Science
- The problem with science: the reproducibility crisis and what to do about it
- Rethinking Reproducibility as a Criterion for Research Quality
- Reproducibility failures are essential to scientific inquiry
- The logical structure of experiments lays the foundation for a theory of reproducibility