#162 Set Matrix Zeroes

Problem Description:

Given an m x n matrix, if an element is 0, set its entire row and column to 0. Do it in place.

Example

Given a matrix

[
  [1,2],
  [0,3]
],

return
[
  [0,2],
  [0,0]
]

Challenge 

Did you use extra space?
A straightforward solution using O(mn) space is probably a bad idea.
A simple improvement uses O(m + n) space, but still not the best solution.
Could you devise a constant space solution?
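
For contrast, here is a minimal sketch of the O(m + n) approach the challenge mentions, written to match the style of the solution further below (the function name setZeroesExtraSpace is my own, for illustration only): record which rows and columns contain a zero in two auxiliary boolean arrays during a first pass, then clear them in a second pass.

#include <vector>
using namespace std;

void setZeroesExtraSpace(vector<vector<int> > &matrix) {
    int num_rows = matrix.size();
    if (num_rows == 0) return;
    int num_cols = matrix[0].size();

    // O(m + n) bookkeeping: which rows/columns contain a zero
    vector<bool> row_has_zero(num_rows, false);
    vector<bool> col_has_zero(num_cols, false);

    // first pass: mark rows and columns that contain a zero
    for (int i = 0; i < num_rows; i++) {
        for (int j = 0; j < num_cols; j++) {
            if (matrix[i][j] == 0) {
                row_has_zero[i] = true;
                col_has_zero[j] = true;
            }
        }
    }

    // second pass: clear every marked row and column
    for (int i = 0; i < num_rows; i++) {
        for (int j = 0; j < num_cols; j++) {
            if (row_has_zero[i] || col_has_zero[j]) {
                matrix[i][j] = 0;
            }
        }
    }
}

The constant-space solution below removes the two auxiliary arrays by reusing the first row and first column of the matrix itself as the markers.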

Solution Approach:

Since the problem asks for a constant-space solution, the natural idea is to store the zero information inside the matrix itself. If position (i, j) is 0, then all of row i and all of column j must become 0. I record this by setting (i, 0) and (0, j) to 0, meaning "row i should be zeroed" and "column j should be zeroed". The catch is that a 0 at (0, 0) cannot tell us whether it stands for row 0 or column 0, so I add two booleans, zero_row and zero_col, to remember whether the first row or the first column originally contained a zero. Once the marking pass is done, the zeros can be filled in. The important detail is that row 0 and column 0 must be overwritten last; otherwise the marking information would be destroyed before it is used. (In the example above, the only zero is at (1, 0), so matrix[1][0] and matrix[0][0] are marked and zero_col becomes true, which later clears row 1 and column 0, giving [[0,2],[0,0]].)

My code (AC = 44ms):

class Solution {
public:
    /**
     * @param matrix: A list of lists of integers
     * @return: Void
     */
    void setZeroes(vector<vector<int> > &matrix) {
        int num_rows = matrix.size();
        if (num_rows == 0) return;
        
        int num_cols = matrix[0].size();
        
        // first pass: if (i, j) is zero, mark its row and column by
        // zeroing matrix[i][0] and matrix[0][j]
        bool zero_row = false, zero_col = false;
        for (int i = 0; i < num_rows; i++) {
            for (int j = 0; j < num_cols; j++) {
                if (matrix[i][j] == 0) {
                    matrix[0][j] = 0;
                    matrix[i][0] = 0;
                    zero_row = zero_row || (i == 0);
                    zero_col = zero_col || (j == 0);
                }
            }
        }
        
        // zero-out every marked column (the first row/column are handled last)
        for (int j = 1; j < num_cols; j++) {
            if (matrix[0][j] == 0) {
                for (int i = 1; i < num_rows; i++) {
                    matrix[i][j] = 0;
                }
            }
        }
        
        // zero-out every marked row (the first row/column are handled last)
        for (int i = 1; i < num_rows; i++) {
            if (matrix[i][0] == 0) {
                for (int j = 1; j < num_cols; j++) {
                    matrix[i][j] = 0;
                }
            }
        }
        
        // zero-out the first row if needed
        if (zero_row) {
            for (int j = 0; j < num_cols; j++) {
                matrix[0][j] = 0;
            }
        }
        
        // zero-out the first column if needed
        if (zero_col) {
            for (int i = 0; i < num_rows; i++) {
                matrix[i][0] = 0;
            }
        }
    }
};
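
For a quick sanity check, the small driver below (my addition, not part of the submission) runs the Solution class above on the example matrix from the problem statement and prints the result:

#include <cstdio>
#include <vector>
using namespace std;

// ... Solution class as defined above ...

int main() {
    // build the example matrix [[1,2],[0,3]]
    int row0[] = {1, 2};
    int row1[] = {0, 3};
    vector<vector<int> > matrix;
    matrix.push_back(vector<int>(row0, row0 + 2));
    matrix.push_back(vector<int>(row1, row1 + 2));

    Solution().setZeroes(matrix);

    // expected output:
    // 0 2
    // 0 0
    for (size_t i = 0; i < matrix.size(); i++) {
        for (size_t j = 0; j < matrix[i].size(); j++) {
            printf("%d ", matrix[i][j]);
        }
        printf("\n");
    }
    return 0;
}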

