Use when you need to connect to the SciGraph SCP server for the Material knowledge graph (a materials-science knowledge graph built from ~150k papers via an LLM-based NLP pipeline; it contains materials, formulas, properties, and synthesis conditions) and call its MCP tools (query_cypher, get_kg_statistics, get_entity_details, get_experiment_workflow), including streamableHttp configuration with the SCP-HUB-API-KEY header and Python 3.10+ usage examples.
Material is a materials-science knowledge graph automatically constructed from ~150,000 peer-reviewed papers using an LLM-based NLP pipeline. It captures structured relationships among entities such as material names, chemical formulas, properties, and synthesis conditions.
Server URL: https://scp.intern-ai.org.cn/api/v1/mcp/37/SciGraph
Auth header: SCP-HUB-API-KEY: {API-KEY}
Install the client library: pip install mcp
{
  "mcpServers": {
    "SciGraph": {
      "type": "streamableHttp",
      "description": "A unified knowledge-query service for scientific research that integrates knowledge-graph data from chemistry, biology, and other disciplines, supporting cross-disciplinary knowledge retrieval, entity-relationship queries, and domain knowledge Q&A",
      "url": "https://scp.intern-ai.org.cn/api/v1/mcp/37/SciGraph",
      "headers": {
        "SCP-HUB-API-KEY": "{API-KEY}"
      }
    }
  }
}
query_cypher — Execute a Cypher query and return JSON results.
Arguments:
- cypher (string, required)
- kg_name (string|null, optional, default null)
- limit (int, optional, default 100)
Example arguments (Material):
{
"cypher": "MATCH (e:Experiment:Material) RETURN e.id as experiment_id",
"kg_name": "Material",
"limit": 5
}
get_kg_statistics — Return graph statistics.
Example arguments:
{ "kg_name": "Material" }
get_entity_details — Return entity details.
Example arguments:
{ "entity_identifier": "experiment_1", "kg_name": "Material" }
get_experiment_workflow — Return the full workflow of an experiment.
Example arguments:
{ "experiment_id": "experiment_1" }
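The four tools and their argument shapes can be collected into a small client-side validation table before calling the server. A minimal sketch: `TOOL_ARGS` and `validate_args` are hypothetical helpers, and the required/optional split for the last three tools is inferred from the example arguments shown above, not from a published schema:

```python
# Required/optional argument names per tool, inferred from the examples above.
TOOL_ARGS = {
    "query_cypher": {"required": {"cypher"}, "optional": {"kg_name", "limit"}},
    "get_kg_statistics": {"required": set(), "optional": {"kg_name"}},
    "get_entity_details": {"required": {"entity_identifier"}, "optional": {"kg_name"}},
    "get_experiment_workflow": {"required": {"experiment_id"}, "optional": set()},
}


def validate_args(tool: str, arguments: dict) -> None:
    """Raise ValueError if arguments don't match the documented shape for tool."""
    spec = TOOL_ARGS.get(tool)
    if spec is None:
        raise ValueError(f"unknown tool: {tool}")
    missing = spec["required"] - arguments.keys()
    unknown = arguments.keys() - spec["required"] - spec["optional"]
    if missing or unknown:
        raise ValueError(f"missing={sorted(missing)} unknown={sorted(unknown)}")


# Matches the example arguments for get_experiment_workflow above.
validate_args("get_experiment_workflow", {"experiment_id": "experiment_1"})
```

Failing fast on a malformed arguments dict gives a clearer error than a round trip to the server.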
import asyncio
import json

from mcp.client.session import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://scp.intern-ai.org.cn/api/v1/mcp/37/SciGraph"


async def main():
    # Open the streamable-HTTP transport, then an MCP session over it.
    async with streamablehttp_client(
        url=SERVER_URL,
        headers={"SCP-HUB-API-KEY": "sk-xxx"},
    ) as (read, write, get_session_id):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Example: stats for Material
            result = await session.call_tool(
                "get_kg_statistics",
                arguments={"kg_name": "Material"},
            )
            data = json.loads(result.content[0].text)
            print(data)


if __name__ == "__main__":
    asyncio.run(main())
For the full scraped page text, read:
references/source.md