An Example of an Association Query in HQL

................................. 

// Theta-style join: the entity classes are listed after "from" and joined
// by hand in the "where" clause, so no <many-to-one> mapping is required.
String hql = "select task, insL.InsName, insR.InsName, tra"
        + " from Institution as insL, Transition as tra, Institution as insR, Task as task"
        + " where task.id = tra.taskId"
        + " and insL.InsId = tra.foreNode"
        + " and insR.InsId = tra.aftNode"
        + (condition == null ? "" : condition);   // optional extra filter

List list = null;
try {
    Query query = session.createQuery(hql);
    list = query.list();
    tx.commit();
} catch (Exception e) {
    tx.rollback();
    e.printStackTrace();
    list = null;
} finally {
    session.close();
}

if (list != null) {
    for (int i = 0; i < list.size(); i++) {
        // Each row of a multi-entity select comes back as an Object[],
        // in the order the items appear in the select clause.
        TransitionDTO transitionDTO = new TransitionDTO();
        Object[] obs = (Object[]) list.get(i);
        Task tmp_task = (Task) obs[0];
        transitionDTO.taskId = tmp_task.getId();
        transitionDTO.foreInsName = (String) obs[1];
        transitionDTO.aftInsName = (String) obs[2];
        Transition tmp_trs = (Transition) obs[3];
        transitionDTO.transitionId = tmp_trs.getId();

.....................
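The DTO that collects each row is not shown in the snippet; a minimal sketch consistent with the fields used in the loop might look like the following (the class layout and field types are assumptions, not the original source):

    // Hypothetical sketch of TransitionDTO: only the fields referenced in
    // the loop above are included, and the types are assumed.
    public class TransitionDTO {
        public int taskId;          // filled from Task.getId()
        public int transitionId;    // filled from Transition.getId()
        public String foreInsName;  // InsName of the fore (source) Institution
        public String aftInsName;   // InsName of the aft (target) Institution
    }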

Note that in the HQL statement, what follows "from" is a class name and what follows the "." is a property name; both must match what is declared in hbm.xml, not the field names in the Java class (they are often identical, but the names used in HQL must agree with the hbm mapping).
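For example (a hypothetical mapping, only to illustrate the naming rule): if Institution.hbm.xml declared the properties like this,

    <!-- hypothetical excerpt from Institution.hbm.xml -->
    <class name="Institution" table="INSTITUTION">
        <id name="InsId" column="INS_ID"/>
        <property name="InsName" column="INS_NAME"/>
    </class>

then the HQL above has to say insL.InsId and insL.InsName, exactly as declared in the mapping, regardless of how the fields happen to be spelled inside the Java class.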

The above code works even though the hbm.xml files do not contain any <many-to-one> properties: the associations are expressed directly as equality conditions in the where clause.
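For comparison, if the mapping did declare the associations (say, hypothetical <many-to-one> properties named task, foreNode and aftNode on Transition, pointing at Task and Institution), the same rows could be fetched through the association paths instead of where-clause conditions. This is only a sketch under that assumption, not the mapping actually used above:

    // Hypothetical variant: assumes Transition maps task/foreNode/aftNode
    // as <many-to-one> associations, which the original hbm.xml does not do.
    String hqlWithJoins = "select task, insL.InsName, insR.InsName, tra"
            + " from Transition as tra"
            + " join tra.task as task"
            + " join tra.foreNode as insL"
            + " join tra.aftNode as insR";

Hibernate would then derive the join conditions from the mapping, and each result row would still come back as the same Object[] structure as before.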
