Visualize the Semantica knowledge graph — topology, centrality, communities, paths, embeddings, decision insights, and temporal evolution. Uses GraphAnalyzer, CentralityCalculator, CommunityDetector, PathFinder, and ContextGraph analytics. Sub-commands: topology, centrality, community, path, decision-graph, insights, temporal, embedding.
Render graph visualizations as Mermaid, ASCII, or structured Markdown. Usage: /semantica:visualize <sub-command> [args]
$ARGUMENTS = sub-command + optional node label or filter.
topology [--filter <node_type>] — Full graph structure analysis: node types, edge distribution, connectivity metrics.
from semantica.kg.graph_analyzer import GraphAnalyzer
from semantica.context import ContextGraph
graph = ContextGraph(advanced_analytics=True)
analyzer = GraphAnalyzer()
# Comprehensive analysis
analysis = analyzer.analyze_graph(graph=graph.to_dict())
metrics = analyzer.compute_metrics(graph=graph)
connectivity = analyzer.analyze_connectivity(graph=graph)
Output:
Graph Topology:
Nodes: N (M types)
Edges: P
Density: 0.23
Avg degree: 4.7
Connected: YES / NO (K components)
Node type distribution:
[Mermaid pie chart]
| Type | Count | % | Avg Degree |
Top-10 connected nodes:
| Node | Type | Degree | Betweenness |
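The node-type distribution can be emitted as a Mermaid pie chart directly from a type-to-count mapping. A minimal sketch of that rendering step; the helper name and the assumption that the analysis result can be reduced to a `{type: count}` dict are illustrative, not part of the GraphAnalyzer API:

```python
def mermaid_pie(title: str, counts: dict[str, int]) -> str:
    """Render a {node_type: count} mapping as Mermaid pie-chart source."""
    lines = [f"pie title {title}"]
    # List the largest slice first so the chart reads in descending order
    for node_type, count in sorted(counts.items(), key=lambda kv: -kv[1]):
        lines.append(f'    "{node_type}" : {count}')
    return "\n".join(lines)

# Illustrative counts only — real values come from the topology analysis
print(mermaid_pie("Node types", {"Entity": 120, "Event": 80, "Decision": 35}))
```

The returned string can be pasted into any Mermaid-capable Markdown renderer.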
centrality [--type degree|betweenness|closeness|eigenvector|pagerank|all] [--top N] — Calculate and rank nodes by centrality.
from semantica.kg.centrality_calculator import CentralityCalculator
from semantica.context import ContextGraph
graph = ContextGraph()
calc = CentralityCalculator()
if centrality_type == "all" or not centrality_type:
scores = calc.calculate_all_centrality(graph=graph)
elif centrality_type == "degree":
scores = calc.calculate_degree_centrality(graph=graph)
elif centrality_type == "betweenness":
scores = calc.calculate_betweenness_centrality(graph=graph)
elif centrality_type == "closeness":
scores = calc.calculate_closeness_centrality(graph=graph)
elif centrality_type == "eigenvector":
scores = calc.calculate_eigenvector_centrality(graph=graph)
elif centrality_type == "pagerank":
scores = calc.calculate_pagerank(
graph=graph,
max_iterations=20,
damping_factor=0.85,
)
Return: | Rank | Node | Type | Degree | Betweenness | Closeness | Eigenvector | PageRank |
For a single node, also call ContextGraph.get_node_centrality(node_id) and get_node_importance(node_id).
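The rank table can be assembled from the per-metric scores. A simplified sketch (it omits the Type column), assuming each calculator returns a `{node_id: score}` mapping — that return shape is an assumption, not documented CentralityCalculator behavior:

```python
def centrality_table(metrics: dict[str, dict[str, float]], top_n: int = 10) -> str:
    """Build a Markdown rank table from {metric_name: {node_id: score}}.

    Nodes are ranked by the first metric given; missing scores render as 0.000.
    """
    names = list(metrics)
    header = "| Rank | Node | " + " | ".join(n.title() for n in names) + " |"
    sep = "|" + "---|" * (len(names) + 2)
    primary = metrics[names[0]]
    rows = []
    for rank, node in enumerate(sorted(primary, key=primary.get, reverse=True)[:top_n], 1):
        cells = " | ".join(f"{metrics[n].get(node, 0.0):.3f}" for n in names)
        rows.append(f"| {rank} | {node} | {cells} |")
    return "\n".join([header, sep, *rows])
```

Passing the dicts in the column order of the template above (degree, betweenness, closeness, eigenvector, pagerank) reproduces that layout.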
community [--algorithm louvain|leiden|label-propagation|overlapping] — Detect and visualize graph communities/clusters.
from semantica.kg.community_detector import CommunityDetector
from semantica.context import ContextGraph
graph = ContextGraph()
detector = CommunityDetector()
algorithm = algo_arg or "louvain"
if algorithm == "louvain":
result = detector.detect_communities_louvain(graph, resolution=1.0)
elif algorithm == "leiden":
result = detector.detect_communities_leiden(graph, resolution=1.0)
elif algorithm == "label-propagation":
result = detector.detect_communities_label_propagation(graph)
elif algorithm == "overlapping":
result = detector.detect_overlapping_communities(graph)
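Detected communities can be drawn as Mermaid subgraphs, one cluster per community. A minimal sketch, assuming the detector's result can be reduced to a `{community_id: [node_labels]}` mapping — that shape is an assumption about the result, not the documented CommunityDetector return type:

```python
def mermaid_communities(communities: dict[str, list[str]]) -> str:
    """Render {community_id: [node_labels]} as Mermaid flowchart subgraphs."""
    lines = ["graph TD"]
    for cid, members in communities.items():
        lines.append(f"    subgraph {cid}")
        for i, label in enumerate(members):
            # Node ids must be unique across the whole diagram,
            # so derive them from the community id plus an index
            lines.append(f'        {cid}_{i}["{label}"]')
        lines.append("    end")
    return "\n".join(lines)

print(mermaid_communities({"c0": ["Auth", "Login"], "c1": ["Billing"]}))
```

Edges between nodes in different subgraphs can be appended after the `end` lines to show inter-community links.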