How to Play Mozilla Firefox's Hidden Unicorn Pong Game

The Firefox logo.

It seems like every browser has a hidden game these days. Chrome has a dinosaur game, Edge has surfing, and Firefox has . . . unicorn pong? Yep, you read that right—here’s how to play it.

First, open Firefox. Click the hamburger menu (the three horizontal lines) at the upper right, and then click “Customize.”

Click the hamburger menu in Firefox, and then click "Customize."

On the “Customize Firefox” tab, you’ll see a list of interface elements you can use to configure the toolbar.

The "Customize Firefox" tab.

Click and drag all the toolbar items except “Flexible Space” into the “Overflow Menu” on the right.

In Firefox, click and drag all the toolbar elements into the "Overflow Menu."

Click the Unicorn button that appears at the bottom of the window.

Click the Unicorn button.

A Pong-like game with a small unicorn icon will appear on the left side of the tab. In this version of the game, the “Flexible Space” box serves as the Pong paddle, and the unicorn icon is the ball.

To play, just use the arrow keys on your keyboard to position your paddle so the unicorn doesn’t move past it. As Atari Pong’s famous instructions said, “Avoid missing unicorn for high score” (or something like that).

The unicorn pong game in Mozilla Firefox.
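
If you have never played Pong, the mechanic is easy to picture: a ball bounces around a box, and your only job is to keep it from escaping past your paddle. The short Python sketch below is a hypothetical, stripped-down version of that loop, included purely for illustration; it is not Firefox's actual implementation, and the playfield dimensions, paddle size, and `step` function are invented for the example.

```python
# A minimal Pong-style update loop, written only to illustrate the mechanic
# described above. This is NOT Firefox's actual implementation; the playfield
# size, paddle size, and function names here are made up.

WIDTH, HEIGHT = 40, 20   # playfield size in arbitrary units
PADDLE_SIZE = 4          # paddle height, also arbitrary


def step(ball, velocity, paddle_top):
    """Advance the ball one tick; return (ball, velocity, still_alive)."""
    x, y = ball[0] + velocity[0], ball[1] + velocity[1]
    vx, vy = velocity

    # Bounce off the top and bottom walls.
    if y <= 0 or y >= HEIGHT - 1:
        vy = -vy
        y = max(0, min(HEIGHT - 1, y))

    # Bounce off the right wall (the side with no paddle).
    if x >= WIDTH - 1:
        vx = -vx
        x = WIDTH - 1

    # At the left edge, bounce only if the paddle covers the ball's row;
    # otherwise the ball got past you and the game is over.
    if x <= 0:
        if paddle_top <= y < paddle_top + PADDLE_SIZE:
            vx = -vx
            x = 0
        else:
            return (x, y), (vx, vy), False

    return (x, y), (vx, vy), True


if __name__ == "__main__":
    # In the real game the arrow keys move the paddle; this demo keeps it
    # still, so the ball may eventually slip past and end the loop.
    ball, velocity, paddle_top, alive, ticks = (20, 10), (-1, 1), 8, True, 0
    while alive and ticks < 1000:
        ball, velocity, alive = step(ball, velocity, paddle_top)
        ticks += 1
    print("Ran for", ticks, "ticks; still alive:", alive)
```
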

If you lose and want to play again, just double-click the Unicorn button.

When you’re done playing, click “Restore Defaults” to quickly remove all the items from the “Overflow Menu.” Click “Done” to close the “Customize Firefox” tab.

Now you can tell all your friends you’ve played Unicorn Pong. If they don’t believe you, just send them the link to this article.

Translated from: https://www.howtogeek.com/688518/how-to-play-mozilla-firefoxs-hidden-unicorn-pong-game/

Here is complete code that uses PyTorch to implement the DQN algorithm for Pong; you can use it as a reference:

```python
import random
from collections import deque

import cv2
import gym
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

# Hyperparameters
BATCH_SIZE = 32
GAMMA = 0.99
EPS_START = 1.0
EPS_END = 0.02
EPS_DECAY = 1000000
TARGET_UPDATE = 1000
MEMORY_CAPACITY = 100000
LR = 1e-4
ENV_NAME = "Pong-v0"

# Environment
env = gym.make(ENV_NAME)
n_actions = env.action_space.n

# Device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


def preprocess(frame):
    """Convert a raw (210, 160, 3) Atari frame to an 84x84 grayscale float array."""
    gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
    resized = cv2.resize(gray, (84, 84), interpolation=cv2.INTER_AREA)
    return resized.astype(np.float32) / 255.0


# Q-network: 4 stacked frames in, one Q-value per action out
class DQN(nn.Module):
    def __init__(self):
        super(DQN, self).__init__()
        self.conv1 = nn.Conv2d(4, 32, kernel_size=8, stride=4)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=4, stride=2)
        self.conv3 = nn.Conv2d(64, 64, kernel_size=3, stride=1)
        self.fc1 = nn.Linear(7 * 7 * 64, 512)
        self.fc2 = nn.Linear(512, n_actions)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.relu(self.conv3(x))
        x = x.view(x.size(0), -1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)


# Experience replay buffer
class ReplayMemory(object):
    def __init__(self, capacity):
        self.memory = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        batch = random.sample(self.memory, batch_size)
        state, action, reward, next_state, done = zip(*batch)
        return state, action, reward, next_state, done

    def __len__(self):
        return len(self.memory)


# DQN agent: epsilon-greedy policy network plus a slowly updated target network
class DQNAgent(object):
    def __init__(self):
        self.policy_net = DQN().to(device)
        self.target_net = DQN().to(device)
        self.target_net.load_state_dict(self.policy_net.state_dict())
        self.target_net.eval()
        self.optimizer = optim.Adam(self.policy_net.parameters(), lr=LR)
        self.memory = ReplayMemory(MEMORY_CAPACITY)
        self.steps_done = 0
        self.episode_durations = []
        self.episode_rewards = []

    def select_action(self, state):
        # Epsilon decays exponentially from EPS_START toward EPS_END
        eps_threshold = EPS_END + (EPS_START - EPS_END) * np.exp(-1.0 * self.steps_done / EPS_DECAY)
        self.steps_done += 1
        if random.random() > eps_threshold:
            with torch.no_grad():
                state_t = torch.from_numpy(np.array(state, dtype=np.float32)).unsqueeze(0).to(device)
                q_values = self.policy_net(state_t)
                return q_values.max(1)[1].view(1, 1)
        return torch.tensor([[random.randrange(n_actions)]], device=device, dtype=torch.long)

    def optimize_model(self):
        if len(self.memory) < BATCH_SIZE:
            return
        state, action, reward, next_state, done = self.memory.sample(BATCH_SIZE)
        state_batch = torch.from_numpy(np.array(state, dtype=np.float32)).to(device)
        action_batch = torch.tensor(action, dtype=torch.long, device=device).unsqueeze(1)
        reward_batch = torch.tensor(reward, dtype=torch.float32, device=device)
        next_state_batch = torch.from_numpy(np.array(next_state, dtype=np.float32)).to(device)
        done_mask = torch.tensor(done, dtype=torch.bool, device=device)

        # Q(s, a) for the actions that were actually taken
        q_values = self.policy_net(state_batch).gather(1, action_batch)

        # max_a' Q_target(s', a') for non-terminal next states only
        next_q_values = torch.zeros(BATCH_SIZE, device=device)
        next_q_values[~done_mask] = self.target_net(next_state_batch[~done_mask]).max(1)[0].detach()
        expected_q_values = (next_q_values * GAMMA) + reward_batch

        loss = F.smooth_l1_loss(q_values, expected_q_values.unsqueeze(1))
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()

    def train(self, num_episodes):
        for i_episode in range(num_episodes):
            frame = preprocess(env.reset())
            state = np.stack((frame, frame, frame, frame), axis=0)  # 4-frame stack
            episode_reward = 0
            for t in range(10000):
                action = self.select_action(state)
                next_frame, reward, done, _ = env.step(action.item())
                episode_reward += reward
                next_frame = preprocess(next_frame)
                next_state = np.append(np.expand_dims(next_frame, 0), state[:3, :, :], axis=0)
                self.memory.push(state, action.item(), reward, next_state, done)
                state = next_state
                self.optimize_model()
                if done:
                    self.episode_durations.append(t + 1)
                    self.episode_rewards.append(episode_reward)
                    if i_episode % 10 == 0:
                        print("Episode: {}, Reward: {}".format(i_episode, episode_reward))
                    break
            # Periodically sync the target network with the policy network
            if i_episode % TARGET_UPDATE == 0:
                self.target_net.load_state_dict(self.policy_net.state_dict())
        env.close()


if __name__ == "__main__":
    agent = DQNAgent()
    agent.train(1000)
```

Note: this code needs the gym library (with its Atari extras), PyTorch, and OpenCV (opencv-python), and it uses the older gym API in which env.step() returns four values. Before running it, make sure those libraries are installed.
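
As a quick follow-up, here is a small sketch of how you might watch a trained agent play greedily (no exploration) once train() has run. It reuses ENV_NAME, device, preprocess, and the agent's policy_net from the block above, assumes the same older gym API, and creates its own evaluation environment since train() closes the training one; the episode count and rendering call are arbitrary choices for illustration.

```python
def watch(agent, episodes=3):
    """Play a few greedy (no-exploration) episodes with the current policy net."""
    eval_env = gym.make(ENV_NAME)
    for _ in range(episodes):
        frame = preprocess(eval_env.reset())
        state = np.stack((frame, frame, frame, frame), axis=0)
        done, total = False, 0.0
        while not done:
            eval_env.render()  # old gym API: opens a window showing the game
            with torch.no_grad():
                state_t = torch.from_numpy(np.array(state, dtype=np.float32)).unsqueeze(0).to(device)
                action = agent.policy_net(state_t).max(1)[1].item()
            next_frame, reward, done, _ = eval_env.step(action)
            frame = preprocess(next_frame)
            state = np.append(np.expand_dims(frame, 0), state[:3, :, :], axis=0)
            total += reward
        print("Greedy episode reward:", total)
    eval_env.close()


# Example usage after training:
# watch(agent)
```
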
