2.4600 Wrought Nickel-Base Superalloy

2.4600 alloy equivalent designations: NS3203, UNS N10675, Hastelloy B-3


1. 2.4600 alloy main description:

2.4600 is a nickel-molybdenum alloy with excellent corrosion resistance to hydrochloric acid at all temperatures and concentrations. It also offers good resistance to sulfuric acid, acetic acid, formic acid, phosphoric acid, and other non-oxidizing media.


2. 2.4600 alloy specification:

Forgings, bar, sheet, strip, rings, wire, bolts, and springs; supply terms are negotiable, and products can be made to customer requirements.

3. 2.4600 application:

2.4600 can be used for all the applications previously served by alloy B-2. As with B-2, alloy B-3 is not recommended for environments containing ferric (iron III) salts or cupric (copper II) salts, because these salts rapidly cause corrosion damage. Such salts form when hydrochloric acid comes into contact with iron or copper.

4. 2.4600 scope of application:

2.4600 is widely used in the chemical, petrochemical, energy-production, and pollution-control industries, especially in sulfuric acid, hydrochloric acid, phosphoric acid, and acetic acid service.

Email: william(at)steel-jw(dot)com
Url:   www(dot)c267steel(dot)com
       www(dot)x80pipeline(dot)com
       www(dot)steel-jw(dot)com
Tel:   86-371-63332179

From the "ITPUB blog"; link: http://blog.itpub.net/29531879/viewspace-1146597/. Please credit the source when reposting; otherwise legal liability may be pursued.

Here are the steps for building an intelligent question-answering model with Python, the Hugging Face Transformers library, and PyTorch:

1. Install the required libraries

Before starting, make sure the following libraries are installed:

- PyTorch
- Hugging Face Transformers
- numpy
- pandas

2. Preprocess the data

To train the model we first need a dataset. In this example we use SQuAD 2.0, a popular question-answering dataset. SQuAD contains a large amount of text, so it must be preprocessed into a format the model can consume. Specifically, each data point is split into three parts:

- the input passage (context)
- the question
- the answer

We can use the pandas library to read and process the JSON file containing the SQuAD data. The following example loads the data into a DataFrame:

```python
import pandas as pd
import json

# Load the data from the JSON file
with open('squad.json') as f:
    data = json.load(f)

# Convert the data to a DataFrame
df = pd.DataFrame(data)
```

(Note that raw SQuAD JSON nests paragraphs inside articles under a top-level `data` key; the code below assumes the file has been flattened to one row per paragraph with `context` and `qas` columns.)

Next we turn each question in the SQuAD data and its corresponding answer into one data point. For each data point we store the context, question, and answer in separate lists:

```python
# Initialize empty lists to store the input texts, questions and answers
texts = []
questions = []
answers = []

# Loop over the rows in the DataFrame and extract the information we need
for i, row in df.iterrows():
    for qa in row['qas']:
        # Skip unanswerable SQuAD 2.0 questions, which have no answers
        if not qa['answers']:
            continue
        # Get the context text
        text = row['context']
        # Get the question text
        question = qa['question']
        # Get the answer text
        answer = qa['answers'][0]['text']
        # Append the context, question and answer to their respective lists
        texts.append(text)
        questions.append(question)
        answers.append(answer)
```

3. Build the model

Next we build the question-answering model. In this example we use the DistilBERT model from the Hugging Face Transformers library, with the AutoTokenizer and AutoModelForQuestionAnswering classes handling tokenization and the model itself:

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Load the DistilBERT tokenizer
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')

# Load the DistilBERT model; the question-answering head is randomly
# initialized here and will be learned during training
model = AutoModelForQuestionAnswering.from_pretrained('distilbert-base-uncased')
```

4. Train the model

We are now ready to train the question-answering model, implementing the training loop in PyTorch.
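The loop as originally written calls a `find_answer_tokens` helper that is never defined. Here is a minimal sketch of such a helper, assuming the answer's token IDs appear verbatim inside the tokenized input; when no match is found it falls back to (0, 0), i.e. the [CLS] position, a common convention for unanswerable examples:

```python
def find_answer_tokens(context_tokens, answer_tokens):
    """Return (start, end) token indices of answer_tokens within context_tokens.

    context_tokens: tensor or list of token IDs for the full tokenized input.
    answer_tokens:  list of token IDs for the answer text.
    """
    # Normalize to a plain list of ints so the slice comparison below is exact
    context_ids = context_tokens.tolist() if hasattr(context_tokens, 'tolist') else list(context_tokens)
    n = len(answer_tokens)
    if n == 0:
        return 0, 0
    # Slide a window over the input and look for an exact ID match
    for start in range(len(context_ids) - n + 1):
        if context_ids[start:start + n] == answer_tokens:
            return start, start + n - 1
    # Answer not found (e.g. truncated away): point at the [CLS] token
    return 0, 0
```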
With that helper in place, a simple training loop looks like this:

```python
import torch

# Set the device to run the model on
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Move the model to the device and put it in training mode
model.to(device)
model.train()

# Set the optimizer and loss function
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
criterion = torch.nn.CrossEntropyLoss()

# Set the batch size and number of epochs
batch_size = 16
num_epochs = 3

# Loop over the training data for the specified number of epochs
for epoch in range(num_epochs):
    # Loop over the batches in the training data
    for i in range(0, len(texts), batch_size):
        # Get a batch of input and target data
        batch_texts = texts[i:i + batch_size]
        batch_questions = questions[i:i + batch_size]
        batch_answers = answers[i:i + batch_size]

        # Tokenize the input data (question first, then context,
        # following the usual convention for QA models)
        inputs = tokenizer(batch_questions, batch_texts, padding=True,
                           truncation=True, max_length=512, return_tensors='pt')

        # Move the input data to the device
        inputs = {key: value.to(device) for key, value in inputs.items()}

        # Get the start and end token positions for each answer
        start_tokens = []
        end_tokens = []
        for j in range(len(batch_answers)):
            answer_tokens = tokenizer(batch_answers[j], add_special_tokens=False)['input_ids']
            context_tokens = inputs['input_ids'][j]
            start, end = find_answer_tokens(context_tokens, answer_tokens)
            start_tokens.append(start)
            end_tokens.append(end)

        # Convert the start and end positions to PyTorch tensors
        start_tokens = torch.tensor(start_tokens).to(device)
        end_tokens = torch.tensor(end_tokens).to(device)

        # Zero the gradients
        optimizer.zero_grad()

        # Forward pass
        outputs = model(**inputs)

        # The loss is the sum of the start- and end-position losses
        start_loss = criterion(outputs.start_logits, start_tokens)
        end_loss = criterion(outputs.end_logits, end_tokens)
        loss = start_loss + end_loss

        # Backward pass
        loss.backward()

        # Update the model parameters
        optimizer.step()

        # Print the loss every 100 batches
        batch_idx = i // batch_size
        if batch_idx % 100 == 0:
            num_batches = (len(texts) + batch_size - 1) // batch_size
            print(f'Epoch {epoch + 1}, Batch {batch_idx + 1}/{num_batches}, Loss {loss.item():.4f}')
```

5. Predict answers

Finally, we can use the trained model to predict the answer to a given question. For example:

```python
# Set the example input text and question
example_text = ('The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars '
                'in Paris, France. It is named after the engineer Gustave Eiffel, whose '
                'company designed and built the tower.')
example_question = 'What is the Eiffel Tower named after?'

# Tokenize the question and context
inputs = tokenizer(example_question, example_text, padding=True, truncation=True,
                   max_length=512, return_tensors='pt')

# Move the input data to the device
inputs = {key: value.to(device) for key, value in inputs.items()}

# Forward pass in evaluation mode, without tracking gradients
model.eval()
with torch.no_grad():
    outputs = model(**inputs)

# Get the predicted start and end tokens of the answer
start_token = torch.argmax(outputs.start_logits)
end_token = torch.argmax(outputs.end_logits)

# Decode the tokens between the predicted start and end to get the answer text
answer_ids = inputs['input_ids'][0][start_token:end_token + 1]
answer_text = tokenizer.decode(answer_ids, skip_special_tokens=True)
print(answer_text)
```

Those are the steps for building an intelligent question-answering model with Python, the Hugging Face Transformers library, and PyTorch. You can train your own model on your own dataset with your own hyperparameters.
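To reuse the fine-tuned model later without retraining, it is common to persist both the model and the tokenizer. A minimal sketch, assuming a local output directory named `qa-model` (the directory name here is chosen purely for illustration):

```python
# Save the fine-tuned model and tokenizer to a local directory
model.save_pretrained('qa-model')
tokenizer.save_pretrained('qa-model')

# Later, reload both for inference
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained('qa-model')
model = AutoModelForQuestionAnswering.from_pretrained('qa-model')
```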