After Page_Load runs, the page does not display the result

 <%@ Page language="c#" Codebehind="WebForm1.aspx.cs" AutoEventWireup="false" Inherits="WebFormApplication.WebForm1" %>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
<HTML>
 <HEAD>
  <title>WebForm1</title>
  <meta name="GENERATOR" Content="Microsoft Visual Studio .NET 7.1">
  <meta name="CODE_LANGUAGE" Content="C#">
  <meta name="vs_defaultClientScript" content="JavaScript">
  <meta name="vs_targetSchema" content="http://schemas.microsoft.com/intellisense/ie5">
  <script language="C#" runat="server">
   void Page_Load()
   {
    Message.Text = "Welcome to ASP+";
   }
  </script>
 </HEAD>
 <body MS_POSITIONING="GridLayout">
  <FONT face="宋体">
   <form id="Form1" method="post" runat="server">
    <% for(int i=0;i<8;i++)
   {%>
    Hello World
    <% } %>
    <%Response.Write("Hello 123456");%>
    <asp:Label ID="Message" Runat="server" />
    <%Page_Load();%>
   </form>
  </FONT>
 </body>
</HTML>

*After the Page_Load function runs, the page does not display the result. The text

"Welcome to ASP+" should have appeared on the page.
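A likely cause: the `@ Page` directive declares `AutoEventWireup="false"`, so ASP.NET never calls the inline `Page_Load` automatically, and the manual `<%Page_Load();%>` block only executes during rendering, after the `Message` label has already been written to the output, so the assignment comes too late to show. A minimal sketch of a fix (assuming the code-behind `WebForm1.aspx.cs` does not define a conflicting `Page_Load`) is to turn auto wire-up on in the directive and delete the manual call after the label:

```aspx
<%@ Page language="c#" Codebehind="WebForm1.aspx.cs" AutoEventWireup="true" Inherits="WebFormApplication.WebForm1" %>
```

With `AutoEventWireup="true"` the runtime binds `Page_Load` to the `Load` event by name, so the label text is set before the page renders. Alternatively, keep `AutoEventWireup="false"` and subscribe explicitly in the code-behind, e.g. `this.Load += new System.EventHandler(this.Page_Load);` inside `InitializeComponent`, which is the wiring pattern Visual Studio .NET normally generates.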
 

This can be done with the following steps:

1. Install OpenCV and a deep learning library such as TensorFlow or PyTorch; both can be installed with pip.
2. Create a Python script and import the required libraries and modules: OpenCV, TensorFlow or PyTorch, Flask, and NumPy.
3. Use OpenCV to create a video-capture object that reads frames from the camera.
4. Load the deep learning model and run object detection on every frame; OpenCV's cv2.dnn module can be used for this.
5. Draw the detection results on each frame and hand the frames to the Flask web application.
6. Create a route in the Flask application that serves the detection results to the web page.
7. Render the results in the browser with JavaScript or other web technologies.

Below is a simple example that renders object-detection results in a web page:

```python
import cv2
import numpy as np
from flask import Flask, render_template, Response

app = Flask(__name__)

# Load the deep learning model
model = cv2.dnn.readNet('model.pb')

# Define the classes
classes = ['class1', 'class2', 'class3']

# Create a video capture object
cap = cv2.VideoCapture(0)

# Detect objects in a single frame of the video stream
def detect_objects(frame):
    # Create a blob from the frame
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    # Set the input to the model
    model.setInput(blob)
    # Make a forward pass through the model
    output = model.forward()
    # Get the dimensions of the frame
    (H, W) = frame.shape[:2]
    # Lists to store the detected objects
    boxes = []
    confidences = []
    classIDs = []
    # Loop over each output layer
    for i in range(len(output)):
        # Loop over each detection in the output layer
        for detection in output[i]:
            # Extract the confidence and class ID
            scores = detection[5:]
            classID = np.argmax(scores)
            confidence = scores[classID]
            # Filter out weak detections
            if confidence > 0.5:
                # Scale the bounding box coordinates back to the frame size
                box = detection[0:4] * np.array([W, H, W, H])
                (centerX, centerY, width, height) = box.astype('int')
                # Calculate the top-left corner of the bounding box
                x = int(centerX - (width / 2))
                y = int(centerY - (height / 2))
                # Record the bounding box, confidence and class ID
                boxes.append([x, y, int(width), int(height)])
                confidences.append(float(confidence))
                classIDs.append(classID)
    # Apply non-maximum suppression to eliminate overlapping bounding boxes
    indices = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.3)
    # flatten() handles both the old Nx1 and the newer flat return shape
    # of NMSBoxes across OpenCV versions
    for i in np.array(indices).flatten():
        (x, y, w, h) = boxes[i]
        # Draw the bounding box and label on the frame
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        text = f'{classes[classIDs[i]]}: {confidences[i]:.2f}'
        cv2.putText(frame, text, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
    return frame

# Generate the multipart video stream
def generate():
    while True:
        # Read a frame from the video stream
        ret, frame = cap.read()
        # Detect objects in the frame
        frame = detect_objects(frame)
        # Encode the frame as a JPEG image
        ret, jpeg = cv2.imencode('.jpg', frame)
        # Yield the JPEG image as one part of the multipart response
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + jpeg.tobytes() + b'\r\n')

# Route for the video stream
@app.route('/video_feed')
def video_feed():
    return Response(generate(), mimetype='multipart/x-mixed-replace; boundary=frame')

# Route for the home page
@app.route('/')
def index():
    return render_template('index.html')

if __name__ == '__main__':
    # Start the Flask application
    app.run(debug=True)
```

In the code above, OpenCV's cv2.dnn module loads the deep learning model and runs object detection on every frame, while a Flask web application presents the results. The '/video_feed' route uses the generate function to produce the video stream, sending each frame to the browser as a JPEG image; the '/' route uses render_template to render the HTML template that displays the detection results.
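The `index.html` template that `render_template` loads is not shown in the answer; a minimal hypothetical version (the route name matches the Flask code above, everything else is illustrative) could be:

```html
<!DOCTYPE html>
<html>
 <head>
  <title>Object Detection Stream</title>
 </head>
 <body>
  <h1>Live Detection</h1>
  <!-- The browser keeps this <img> updated from the multipart MJPEG stream -->
  <img src="{{ url_for('video_feed') }}" alt="video stream">
 </body>
</html>
```

Because the `multipart/x-mixed-replace` response continuously replaces the image data, a plain `<img>` tag is enough here; no JavaScript is strictly required to display the stream.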
