Access URL
http://localhost:7474/browser/
Credentials: neo4j / 123456
Bulk data import methods
https://blog.csdn.net/mmmmmyyyy/article/details/107897217
Learning Cypher functions
https://www.cnblogs.com/ljhdo/p/5516793.html
Creating a relationship between two nodes
match (n:`预报节点`{STCD:'308A5901'}),(m:`预报节点`{STCD:'30806201'}) create (n)-[r:`下游预报节点`]->(m) return r
Knowledge-graph path queries
1. https://blog.csdn.net/ai_1046067944/article/details/85342567?utm_medium=distribute.pc_relevant.none-task-blog-BlogCommendFromMachineLearnPai2-1.control&depth_1-utm_source=distribute.pc_relevant.none-task-blog-BlogCommendFromMachineLearnPai2-1.control
2. https://www.cnblogs.com/ljhdo/p/5516793.html
Moving from one node, across a direct relationship, to another node is called traversal. The sequence of nodes and relationships traversed is called a path (Path): an ordered combination of nodes and relationships.
(a)-->(b): a path of length 1; nodes a and b are directly connected by one relationship;
(a)-->()-->(b): a path of length 2; from node a, through two relationships and one intermediate node, to node b;
Cypher supports variable-length path patterns, written as [*N..M], where N and M are the minimum and maximum path lengths.
(a)-[*2]->(b): a path of exactly length 2, starting at node a and ending at node b;
(a)-[*3..5]->(b): a path of minimum length 3 and maximum length 5, from a to b;
(a)-[*..5]->(b): a path of maximum length 5, from a to b;
(a)-[*3..]->(b): a path of minimum length 3, from a to b;
(a)-[*]->(b): a path of unrestricted length, from a to b;
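The patterns above can be combined with the forecast-node data created earlier. A sketch (assumes the `预报节点` nodes and `下游预报节点` relationships from the creation example above exist):

```cypher
// Find all forecast nodes reachable from station 308A5901
// within 1 to 3 downstream hops, returning the full paths
MATCH p = (n:`预报节点` {STCD:'308A5901'})-[:`下游预报节点`*1..3]->(m:`预报节点`)
RETURN p
```

Note that unbounded patterns like [*] can be expensive on large graphs; prefer an explicit upper bound where possible.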
Example:
Import stations from CSV:
USING PERIODIC COMMIT 300 LOAD CSV WITH HEADERS FROM 'file:///stations.csv' AS line
create (:stations {stcd:line.stcd,stnm:line.stnm,rvnm:line.rvnm,hnnm:line.hnnm,bsnm:line.bsnm,lgtd:line.lgtd,lttd:line.lttd,stlc:line.stlc,addvcd:line.addvcd,dtmnm:line.dtmnm,dtmel:line.dtmel,dtpr:line.dtpr,sttp:line.sttp,frgrd:line.frgrd,esstym:line.esstym,bgfrym:line.bgfrym,atcunit:line.atcunit,admauth:line.admauth,locality:line.locality,stbk:line.stbk,stazt:line.stazt,dstrvm:line.dstrvm,drna:line.drna,phcd:line.phcd,usfl:line.usfl,comments:line.comments,moditime:line.moditime})
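After an import like the one above, a quick sanity check is useful (a sketch; assumes the `stations` label from the import):

```cypher
// Verify how many station nodes were imported
MATCH (n:stations) RETURN count(n) AS imported
```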
Flood detention basins
USING PERIODIC COMMIT 300 LOAD CSV WITH HEADERS FROM 'file:///xzhq.csv' AS line
create (:st_detention_basin {stcd:line.ennmcd,stnm:line.ennm,fid:line.fid,area:line.area,perimerter:line.perimerter,lgtd:line.lon,lttd:line.lat})
Relationships
LOAD CSV WITH HEADERS FROM "file:///relation3.csv" AS line
// note: the st_detention_basin nodes created above store the code in `stcd`, so match on stcd
match (from:st_detention_basin {stcd:line.p1}),(to:stations{stcd:line.p2})
merge (from)-[r:下游预报节点2]->(to)
LOAD CSV WITH HEADERS FROM "file:///relation3.csv" AS line
match (from:st_detention_basin {stcd:line.p1}),(to:stations{stcd:line.p2})
merge (from)-[r:forecastPoint{name:"下游预报节点"}]->(to)
LOAD CSV WITH HEADERS FROM "file:///stpptn_forcastid.csv" AS line
match (from:stations{stcd:line.p1}),(to:stations{stcd:line.p2})
merge (from)-[r:forecastPoint{name:"预报关联站点",forcastid:line.forcastid}]->(to)
Relationships
LOAD CSV WITH HEADERS FROM "file:///riverPointsRelation.csv" AS line
match (from:riverPoint{stcd:line.stcd1}),(to:riverPoint{stcd:line.stcd2})
merge (from)-[r:riverRelation{name:line.name}]->(to)
Stations (river points)
USING PERIODIC COMMIT 300 LOAD CSV WITH HEADERS FROM 'file:///riverPoints.csv' AS line
create (:riverPoint {stcd:line.stcd,stnm:line.stnm,rlength:line.rlength,ttofw:line.ttofw,rivertype:line.rivertype})
Delete the relationship between two nodes
match (n1),(n2)
where n1.name="lisi" AND n2.name="王五"
optional match (n1)-[r]-(n2)
delete r
Rename a relationship type (create a copy with the new type, then delete the old one)
match(n)-[r:`forecastPiont2`]->(m) create(n)-[r2:`forecastPiont`]->(m) set r2=r with r delete r
Set properties on an existing relationship (note: SET r={...} overwrites all previously set properties)
MATCH p=()-[r:`forecastPiont`]->() SET r={name:"下游预报节点"} RETURN p
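If the existing properties should be kept rather than replaced, Cypher also supports the += mutation operator, which merges the map into the relationship instead of overwriting it. A sketch:

```cypher
// `+=` adds/updates the listed properties but leaves other existing ones intact
MATCH ()-[r:`forecastPiont`]->()
SET r += {name:"下游预报节点"}
RETURN r
```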
Add a property to existing nodes
MERGE (n:st_detention_basin) SET n.dd = 'FsCD' RETURN n
Delete a node together with its relationships
MATCH (r)
WHERE id(r) = 2299
DETACH DELETE r
Query records
MATCH (n:预报节点)--(b:流域区间) where n.STNM={stnm} RETURN properties(n) as n,properties(b) as b
match(n:`预报节点`{STCD:'30809301'})-[r*..2]-(m) return n,r,m
match(n:`预报节点`{STCD:'30809301'})-[r1]-(p)-[r2]-(m) return n,type(r1),p,type(r2),m
A handy Excel formula for tidying data in a companion spreadsheet (looks up the value of b1 in column A and returns the matching value from column C; the 0/(condition) trick turns non-matches into errors that LOOKUP skips):
LOOKUP(1,0/($a$1:a200=b1),$c$1:c200)