无心剑's Chinese Translation of Shakespeare's Sonnet 116: 《爱如星辰引迷舟》


Shakespeare's Sonnet 116

Sonnet 116 爱如星辰引迷舟

Let me not to the marriage of true minds
Admit impediments. Love is not love
Which alters when it alteration finds,
Or bends with the remover to remove:
O, no! it is an ever-fixed mark
That looks on tempests and is never shaken;
It is the star to every wandering bark,
Whose worth’s unknown, although his height be taken.
Love’s not Time’s fool, though rosy lips and cheeks
Within his bending sickle’s compass come:
Love alters not with his brief hours and weeks,
But bears it out even to the edge of doom.
If this be error and upon me proved,
I never writ, nor no man ever loved.

我不承认真心的姻缘
会有障碍,这种爱不算
发现风吹草动就心志不坚
或发现人家见异思迁就完蛋
哦,不!爱是永恒的航灯
凝望暴风雨却丝毫不动
爱犹如星辰指引迷舟航程
高度可测,价值却无穷
爱不屈从时光,朱唇红颜
却终不免遭受无常摧残
爱绝不会随岁月日渐暗淡
直到末日也永葆新鲜
若这话不对,或证明我错
我没写过,亦无人爱过

Translated on November 29, 2022

Shakespeare's Sonnet 116 is a hymn to constant love: through vivid imagery and striking metaphors, it voices the poet's unshakable faith in love's permanence. 无心剑's version conveys the mood and feeling of the original well.

First, the translator preserves the rhyme and rhythm of the original, so that the translation echoes it in form; the diction likewise strives for accuracy, keeping the translation faithful to the original in meaning.

In its phrasing, the translation uses vivid Chinese to render the original's imagery and feeling. For example, in "我不承认真心的姻缘/会有障碍,这种爱不算", the phrase "不承认真心的姻缘/会有障碍" accurately carries "Let me not to the marriage of true minds / Admit impediments", while "这种爱不算" states the poet's dismissal of inconstant love plainly and concisely.

The translator also makes deft adjustments so that the verse reads naturally in Chinese. In "爱犹如星辰指引迷舟航程/高度可测,价值却无穷", rendering "star" as "星辰" keeps the original image while giving the line the flavor of Chinese poetry, and "高度可测,价值却无穷" accurately captures "Whose worth's unknown, although his height be taken".

Overall, the translation conveys the poem's theme and feeling: praise of true love and unwavering faith in it. The translator's skill and expressiveness also deserve recognition. That said, poetry translation is inherently subjective and admits many readings, so readers are free to savor and judge the version in light of their own understanding.

In short, 无心剑's rendering of Shakespeare's Sonnet 116 is a fine Chinese translation: it faithfully conveys the mood and feeling of the original while taking on the character and cadence of Chinese verse.

Below is example Python code that uses TensorFlow to generate Shakespeare-style verse. Note that it targets the TensorFlow 1.x API (it relies on `tf.contrib`, which was removed in TensorFlow 2.x):

```python
import tensorflow as tf  # TensorFlow 1.x (tf.contrib is gone in 2.x)
import numpy as np

# Model hyperparameters
num_epochs = 50
batch_size = 64
rnn_size = 256
num_layers = 2
learning_rate = 0.01
keep_prob = 0.5

# Read the training corpus
with open('shakespeare.txt', 'r') as f:
    text = f.read()

# Build character <-> integer lookup tables
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = {i: c for i, c in enumerate(vocab)}
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)

# Slice the corpus into inputs and labels (labels are inputs shifted by one)
seq_length = 100
num_seqs = (len(encoded) - 1) // seq_length  # -1 keeps the shifted labels in bounds
inputs = np.zeros((num_seqs, seq_length), dtype=np.int32)
labels = np.zeros((num_seqs, seq_length), dtype=np.int32)
for i in range(num_seqs):
    inputs[i] = encoded[i * seq_length:(i + 1) * seq_length]
    labels[i] = encoded[i * seq_length + 1:(i + 1) * seq_length + 1]
num_batches = num_seqs // batch_size

# Build the model graph
inputs_placeholder = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_placeholder = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob_placeholder = tf.placeholder(tf.float32, name='keep_prob')
# A scalar batch-size placeholder lets the same graph run with batch 1 at generation time
batch_size_placeholder = tf.placeholder(tf.int32, [], name='batch_size')

embedding_size = 128
rnn_inputs = tf.contrib.layers.embed_sequence(inputs_placeholder, len(vocab), embedding_size)

def make_cell():
    # Each layer needs its own cell object; reusing one would tie the weights
    cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob_placeholder)

stacked_rnn = tf.contrib.rnn.MultiRNNCell([make_cell() for _ in range(num_layers)])
initial_state = stacked_rnn.zero_state(batch_size_placeholder, tf.float32)
outputs, final_state = tf.nn.dynamic_rnn(stacked_rnn, rnn_inputs, initial_state=initial_state)
logits = tf.contrib.layers.fully_connected(outputs, len(vocab), activation_fn=None)
probs = tf.nn.softmax(logits, name='probs')  # sampling distribution for generation

# Loss and optimizer
loss = tf.contrib.seq2seq.sequence_loss(
    logits,
    labels_placeholder,
    tf.ones([batch_size, seq_length], dtype=tf.float32))
optimizer = tf.train.AdamOptimizer(learning_rate)
train_op = optimizer.minimize(loss)

# Train the model
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(num_epochs):
        state = sess.run(initial_state, feed_dict={batch_size_placeholder: batch_size})
        for i in range(num_batches):
            x = inputs[i * batch_size:(i + 1) * batch_size]
            y = labels[i * batch_size:(i + 1) * batch_size]
            feed = {inputs_placeholder: x, labels_placeholder: y,
                    initial_state: state, keep_prob_placeholder: keep_prob}
            batch_loss, state, _ = sess.run([loss, final_state, train_op], feed_dict=feed)
        print('Epoch {}/{}...'.format(epoch + 1, num_epochs),
              'Batch Loss: {:.4f}'.format(batch_loss))

    # Generate new text: prime the state with a seed string, then sample
    gen_length = 500
    prime = 'To be or not to be:'
    gen_sentences = prime
    prev_state = sess.run(initial_state, feed_dict={batch_size_placeholder: 1})
    x = np.zeros((1, 1), dtype=np.int32)
    for char in prime:  # the model is character-level, so feed one character at a time
        x[0, 0] = vocab_to_int[char]
        feed = {inputs_placeholder: x, initial_state: prev_state,
                keep_prob_placeholder: 1.0}
        prev_state = sess.run(final_state, feed_dict=feed)
    for _ in range(gen_length):
        feed = {inputs_placeholder: x, initial_state: prev_state,
                keep_prob_placeholder: 1.0}
        preds, prev_state = sess.run([probs, final_state], feed_dict=feed)
        pred = preds[0, -1]  # distribution over the next character
        next_index = np.random.choice(len(pred), p=pred)
        gen_sentences += int_to_vocab[next_index]
        x[0, 0] = next_index
    print(gen_sentences)
```

In this code, we first read Shakespeare's poetry as training data and build a character-level vocabulary. We then assemble a stacked LSTM language model in TensorFlow and train it. Finally, we use the trained model to generate new Shakespeare-style text.
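One common refinement, not part of the original post, is to sample with a temperature parameter rather than drawing directly from the raw softmax distribution: lower temperatures make the generated verse more conservative, higher temperatures more varied. A minimal sketch (the helper name `sample_with_temperature` is ours):

```python
import numpy as np

def sample_with_temperature(pred, temperature=0.8):
    # Hypothetical helper, not from the original post: rescale a
    # probability distribution by a temperature and sample from it.
    logits = np.log(pred + 1e-10) / temperature   # back to (smoothed) logits
    exp_logits = np.exp(logits - np.max(logits))  # numerically stable softmax
    rescaled = exp_logits / exp_logits.sum()
    return np.random.choice(len(rescaled), p=rescaled)
```

In the generation loop above, `np.random.choice(len(pred), p=pred)` would then become `sample_with_temperature(pred, 0.8)`.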