Cannot call this method while RecyclerView is computing a layout or scrolling android.support.v7.widget.RecyclerView

When a for loop finished iterating over the data, I immediately updated the adapter with adapter.notifyDataSetChanged(); and got this error: java.lang.IllegalStateException: Cannot call this method while RecyclerView is computing a layout or scrolling android.support.v7.widget.RecyclerView
IllegalStateException: this method cannot be called while the RecyclerView is computing a layout or scrolling.
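For illustration, one common way to end up in that state (not necessarily the exact loop from this post; recyclerView, items, and adapter are placeholder names) is to mutate the data and notify from a scroll callback, i.e. while the RecyclerView is still scrolling:

recyclerView.addOnScrollListener(new RecyclerView.OnScrollListener() {
    @Override
    public void onScrolled(RecyclerView rv, int dx, int dy) {
        // mutating the data mid-scroll and notifying immediately
        items.add("new row");
        adapter.notifyDataSetChanged(); // IllegalStateException: computing a layout or scrolling
    }
});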
Solution:

new Handler().post(new Runnable() {
    @Override
    public void run() {
        // refresh the adapter after the current layout/scroll pass finishes
        adapter.notifyDataSetChanged();
    }
});
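An equivalent approach, assuming a reference to the RecyclerView is in scope, is to post through the view itself; View.post() queues the Runnable on the main thread so it runs after the current layout/scroll pass:

recyclerView.post(new Runnable() {
    @Override
    public void run() {
        // safe now: the pending layout/scroll pass has completed
        adapter.notifyDataSetChanged();
    }
});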

The snippets above work because they run on the main thread, where a Looper already exists. If you call new Handler() on a child thread instead, you get java.lang.RuntimeException: Can't create handler inside thread that has not called Looper.prepare(). The reason: when a Handler is created on the main thread, the system has already created a Looper for that thread, but a child thread does not get a Looper automatically, and creating a Handler there without one makes the app crash. Note also that RecyclerView may only be updated from the main thread, so from a child thread the Handler should be bound to the main thread's Looper.
From a child thread, do it like this:

new Thread(new Runnable() {
    @Override
    public void run() {
        // Binding the Handler to the main thread's Looper avoids the
        // missing-Looper crash, and notifyDataSetChanged() has to run
        // on the UI thread anyway.
        new Handler(Looper.getMainLooper()).post(new Runnable() {
            @Override
            public void run() {
                // refresh the adapter
                adapter.notifyDataSetChanged();
            }
        });
    }
}).start();
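For completeness: if you genuinely want a Handler whose messages are processed on the child thread itself (the situation the Looper.prepare() error is about), the calls must come in this order, because Looper.loop() blocks and starts dispatching. A minimal sketch follows; the log tag is illustrative, and no UI may be touched from this thread:

new Thread(new Runnable() {
    @Override
    public void run() {
        Looper.prepare();                // give this thread a Looper
        Handler handler = new Handler(); // bound to this thread's Looper
        handler.post(new Runnable() {
            @Override
            public void run() {
                // runs on the child thread, so no adapter/UI calls here
                Log.d("WorkerThread", "handled on the child thread");
            }
        });
        Looper.loop();                   // blocks; dispatches posted Runnables
    }
}).start();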
