Exponential functions in C with gcc, chliang

pull request comment opencv/opencv

@dmatveev @AsyaPronina Please look

aDanPin

chliang

comment created time in a minute

Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

[ ] I agree to contribute to the project under Apache 2 License.

[ ] To the best of my knowledge, the proposed patch is not based on a code under GPL or other license that is incompatible with OpenCV

[ ] The PR is proposed to proper branch

[ ] There is reference to original bug report and related work

[ ] There is accuracy test, performance test and test data in opencv_extra repository, if applicable

Patch to opencv_extra has the same branch name.

[ ] The feature is well documented and sample code can be built with the project CMake

+44 -14, 0 comments, 5 files changed

pr created time in 3 minutes

pull request comment opencv/opencv

@danielenricocahall Please take a look at the failed builds (you may start with a "debug" local build)

I think all issues are resolved - please feel free to re-review at your earliest convenience.

danielenricocahall

chliang

comment created time in 38 minutes

pull request comment opencv/opencv

Is it possible to make the code compatible with both old and new versions with help of preprocessor macros?
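(For illustration only: a minimal sketch of the version-gating pattern this question refers to; THIRDPARTY_API_VERSION, old_api_call() and new_api_call() are invented placeholders, not anything from this PR.)

// Sketch: select between an old and a new dependency API at compile time.
#include <iostream>

#ifndef THIRDPARTY_API_VERSION
#define THIRDPARTY_API_VERSION 3  // pretend the dependency's headers define their version
#endif

static int old_api_call() { return 1; }  // stand-in for the legacy interface
static int new_api_call() { return 2; }  // stand-in for the current interface

int compatible_call()
{
#if THIRDPARTY_API_VERSION >= 3
    return new_api_call();   // new versions take this branch
#else
    return old_api_call();   // old versions still compile via this branch
#endif
}

int main()
{
    std::cout << "selected API returned " << compatible_call() << "\n";
}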

jiapei100

chliang

comment created time in an hour

System information (version)

OpenCV => 4.5.2 (git checkout 4.5.2)

Operating System / Platform => Ubuntu 18.04 LTS

Compiler => GNU 7.5.0

Detailed description

In 3rdparty/readme.txt, it says:

In order to use these versions of libraries instead of system ones on UNIX systems you should use BUILD_<library_name> CMake flags (for example, BUILD_PNG for the libpng library).

However, what I found is that, at least for libjpeg-turbo, even if I do not use the option -DBUILD_JPEG=ON, it will still use the libjpeg-turbo under 3rdparty/libjpeg-turbo.

Steps to reproduce

mkdir build

cd build

cmake ..

make

To check whether the build system is using the system library or not, we can insert a small error into libjpeg-turbo:

diff --git a/3rdparty/libjpeg-turbo/src/jpeglib.h b/3rdparty/libjpeg-turbo/src/jpeglib.h
index d7664f0630..3b511ab5d9 100644
--- a/3rdparty/libjpeg-turbo/src/jpeglib.h
+++ b/3rdparty/libjpeg-turbo/src/jpeglib.h
@@ -17,7 +17,7 @@
 #ifndef JPEGLIB_H
 #define JPEGLIB_H
-
+,

Then, after we compile the code, we can see the error message caused by the error introduced above, which indicates that OpenCV indeed used the version in 3rdparty instead of the system library.
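As a less invasive check (a sketch that only assumes an installed OpenCV build): the Media I/O section of the build report should show whether JPEG comes from the bundled 3rdparty sources or from a system path.

// Print OpenCV's build summary; the "Media I/O" section lists which JPEG
// library (bundled 3rdparty build vs. system library) the build picked up.
#include <iostream>
#include <opencv2/core/utility.hpp>

int main()
{
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}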

Issue submission checklist

[done] I report the issue, it's not a question

[done] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found solution

[done] I updated to latest OpenCV version and the issue is still there

[done] There is reproducer code and related data files: videos, images, onnx, etc

closed time in an hour

yqtianust

chliang

issue comment opencv/opencv

Yes, probably the status output can be updated to work as it does for libpng and other similar libraries: https://github.com/opencv/opencv/blob/0f11b1fc0d329c7cd7bd425f006b6583c1f23489/CMakeLists.txt#L1298-L1300

Usually it links with dynamic libraries.

yqtianust

chliang

comment created time in an hour

From the posted log, it is

https://github.com/PaddlePaddle/PaddleX/blob/e43a1977a025d5aec17de2abb75a74ac6f3552bd/deploy/lite/android/sdk/src/main/java/com/baidu/paddlex/preprocess/Transforms.java#L119

this line that throws the error when fetching target_size; check whether this field exists in the config file.

After that, configParser was initialized based on where the error occurred, and the app ended up loading the model forever.

Does this mean the config-file error has already been solved? In principle, FasterRCNN should use ResizeByShort rather than Resize, but this log shows execution reached Resize. Is your config file perhaps still the YOLO one?

Also, we previously tested with PaddleLite==2.6; if you are using the latest version, you need to roll back the PaddleLite version and then re-export the model.

I just checked the source code of this class, and there is no line "padding.width = ((List) info.get("target_size")).get(0);" in it. The config file in the original official demo did not initialize this, and after I got the error with FasterRCNN I initialized it myself, so it should not be the YOLO one.

a794133319

chliang

comment created time in an hour

issue comment opencv/opencv

A passing test does not guarantee safety; usually thread-related problems can be reproduced only once in multiple test runs, or only under certain conditions such as a busy system. This test and lock mechanism are quite old (https://github.com/opencv/opencv/pull/164) and there is a chance these workarounds are not required anymore for modern FFmpeg versions. There are mentions that it uses internal locking now and can be considered thread-safe in most configurations (https://lists.ffmpeg.org/pipermail/libav-user/2014-August/007298.html).

Our FFmpeg plugin for Windows is built with threading support: https://github.com/opencv/opencv_3rdparty/blob/ead6edd2bea1e31275f8a4d756cd068f8034af9a/ffmpeg/make_mingw.sh#L119

I think we can either try to remove the locks, but only for newer FFmpeg versions, or provide an environment variable to control the locking mechanism at runtime (enabled by default). Maybe @alalek can suggest something else.
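A minimal sketch of the environment-variable option mentioned above; the OPENCV_FFMPEG_LOCKING name and the wrapper function are hypothetical, and locking stays enabled by default.

// Sketch: wrap the existing FFmpeg lock so it can be disabled at runtime.
// OPENCV_FFMPEG_LOCKING is a hypothetical variable name used for illustration.
#include <cstdlib>
#include <cstring>
#include <mutex>

static bool ffmpegLockingEnabled()
{
    const char* env = std::getenv("OPENCV_FFMPEG_LOCKING");
    // Keep the historical behaviour (locking on) unless the user opts out with "0".
    return !(env && std::strcmp(env, "0") == 0);
}

static std::mutex g_ffmpeg_mutex;

template <typename Fn>
void withFFmpegLock(Fn&& fn)
{
    if (ffmpegLockingEnabled())
    {
        std::lock_guard<std::mutex> guard(g_ffmpeg_mutex);
        fn();
    }
    else
    {
        fn();   // newer FFmpeg is expected to handle locking internally
    }
}

int main()
{
    withFFmpegLock([] { /* call into avformat/avcodec here */ });
}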

cudawarped

chliang

comment created time in an hour

Pull request review comment opencv/opencv

struct TestWithParamsSpecific : public TestWithParamsBase::value`

As I look at it more and more, it seems like __VA_ARGS__ are actually the names of variables to be created later.

you can look at what DEFINE_SPECIFIC_PARAMS does to get the idea.

anyhow, if you figure out a nice way to simplify this, by all means feel free to update this code.

sivanov-work

chliang

comment created time in an hour

pull request comment d2l-ai/d2l-zh

Job d2l-zh/PR-830/1 is complete.

Check the results at http://preview.d2l.ai/d2l-zh/PR-830/

npudqsz

chliang

comment created time in 2 hours

+4 -4, 0 comments, 1 file changed

pr created time in 2 hours

Pull request review comment opencv/opencv

struct TestWithParamsSpecific : public TestWithParamsBase::value`

it depends on whether __VA_ARGS__ are variables or types

I think I understand your point: yes, my home-grown function implementation is for variables only; to cover types we need another solution. But implementing a generic solution (for both types and variables) would require a lot of effort to decompose the sub-problems for types and variables that are currently handled by a single overloaded entity.

But in this specific case we receive variables only (for checking fixture correctness), and it looks like it is enough to treat __VA_ARGS__ as variables here.

Sorry if I understood you the wrong way.
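To illustrate the variables-versus-types distinction discussed here (helper names are invented for the example; this is not the G-API test code):

// A pack of *variables* can be handled by an ordinary function template,
// while a pack of *types* has to go through a type-level helper instead.
#include <cstddef>

template <typename... Vars>
std::size_t countVariables(const Vars&...) { return sizeof...(Vars); }

template <typename... Types>
struct CountTypes { static const std::size_t value = sizeof...(Types); };

int main()
{
    int width = 0;
    double scale = 1.0;
    std::size_t fromVariables = countVariables(width, scale);   // __VA_ARGS__ as variables
    std::size_t fromTypes     = CountTypes<int, double>::value; // __VA_ARGS__ as types
    return (fromVariables == 2 && fromTypes == 2) ? 0 : 1;
}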

sivanov-work

chliang

comment created time in 3 hours

As per comment https://github.com/opencv/opencv/pull/20010#commitcomment-51027021

Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

[x] I agree to contribute to the project under Apache 2 License.

[x] To the best of my knowledge, the proposed patch is not based on a code under GPL or other license that is incompatible with OpenCV

[x] The PR is proposed to proper branch

[x] There is reference to original bug report and related work

[ ] There is accuracy test, performance test and test data in opencv_extra repository, if applicable

Patch to opencv_extra has the same branch name.

[ ] The feature is well documented and sample code can be built with the project CMake

+2 -2, 0 comments, 1 file changed

pr created time in 3 hours

Pull request review comment opencv/opencv

class GAPI_EXPORTS_W_SIMPLE GStreamingCompiled
 };
 /** @} */
+namespace gapi {
+/**
+ * @brief Ask G-API to set specific queue capacity for streaming execution.
+ *
+ * For streaming execution G-API has a special concurrent bounded queue to
+ * fetch next frame while current is being processed. This compilation argument
+ * specifies the capacity of this queue.
+ */
+struct queue_capacity
+{
+    size_t capacity;

I don't know how to add default value for struct in c++11
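One C++11-compatible way to give such a struct a default, as a sketch rather than the PR's actual code: a constructor with a defaulted argument keeps queue_capacity{1} usable, whereas an in-class member initializer would make the type a non-aggregate until C++14 and break that form of initialization.

// Sketch: default value for a C++11 compile-argument-style struct.
#include <cstddef>

struct queue_capacity
{
    explicit queue_capacity(std::size_t cap = 1) : capacity(cap) {}
    std::size_t capacity;
};

int main()
{
    queue_capacity by_default;   // capacity == 1
    queue_capacity custom(4);    // capacity == 4
    return (by_default.capacity == 1 && custom.capacity == 4) ? 0 : 1;
}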

TolyaTalamanov

chliang

comment created time in 3 hours

Pull request review comment opencv/opencv

cv::gimpl::GStreamingExecutor::GStreamingExecutor(std::unique_ptr<ade::Graph> &&
     m_sink_queues .resize(proto.out_nhs.size(), nullptr);
     m_sink_sync   .resize(proto.out_nhs.size(), -1);
-    // Very rough estimation to limit internal queue sizes.
+    // Very rough estimation to limit internal queue sizes if not specified by the user.
     // Pipeline depth is equal to number of its (pipeline) steps.
-    const auto queue_capacity = 3*std::count_if
-        (m_gim.nodes().begin(),
-         m_gim.nodes().end(),
-         [&](ade::NodeHandle nh) {
-            return m_gim.metadata(nh).get().k == NodeKind::ISLAND;
-         });
+    auto has_queue_capacity = cv::gapi::getCompileArg<cv::gapi::queue_capacity>(m_comp_args);
+    const auto queue_capacity = has_queue_capacity ? has_queue_capacity->capacity :

Needed ?

TolyaTalamanov

chliang

comment created time in 3 hours

Pull request review comment opencv/opencv

TEST_P(GAPI_Streaming, SmokeTest_VideoConstSource_NoHang)
     auto testc = cv::GComputation(cv::GIn(in, in2), cv::GOut(out))
                      .compileStreaming(cv::GMatDesc{CV_8U,3,cv::Size{256,256}},
                                        cv::GMatDesc{CV_8U,3,cv::Size{768,576}},
-                                       cv::compile_args(cv::gapi::use_only{getKernelPackage()}));
+                                       cv::compile_args(cv::gapi::use_only{getKernelPackage()},
+                                                        cv::gapi::queue_capacity{1}));

Done

TolyaTalamanov

chliang

comment created time in 3 hours

Pull request review comment opencv/opencv

class GAPI_EXPORTS_W_SIMPLE GStreamingCompiled
 };
 /** @} */
+namespace gapi {
+/**
+ * @brief Ask G-API to set specific queue capacity for streaming execution.
+ *
+ * For streaming execution G-API has a special concurrent bounded queue to
+ * fetch next frame while current is being processed. This compilation argument
+ * specifies the capacity of this queue.
+ */
+struct queue_capacity

Done

TolyaTalamanov

chliang

comment created time in 3 hours

Pull request review comment opencv/opencv

struct TestWithParamsSpecific : public TestWithParamsBase::value`

in theory, the whole thing could be rewritten to only use templates (making the macro a no-op); not sure why I didn't go this way. There was something about declaring member variables which couldn't be done with simple template magic (maybe with complicated magic, but it probably seemed like too much at the time).
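For illustration, the template-only variant would look roughly like the sketch below (TupleBackedFixture is an invented name); it shows the trade-off mentioned above: no macro is needed, but the parameters lose their individual member names.

// Sketch: storing the specific parameters in a tuple needs no macro at all,
// but the values are then reached via std::get<N>() rather than by name.
#include <tuple>

template <typename... SpecificParams>
struct TupleBackedFixture                    // illustrative name only
{
    std::tuple<SpecificParams...> params;    // positional access instead of named members
};

int main()
{
    TupleBackedFixture<int, double> f{std::make_tuple(640, 0.5)};
    return std::get<0>(f.params) == 640 ? 0 : 1;
}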

sivanov-work

chliang

comment created time in 3 hours

Pull request review comment opencv/opencv

struct TestWithParamsSpecific : public TestWithParamsBase::value`

it depends on whether __VA_ARGS__ are variables or types (which is also a problem with std::make_tuple(), I think). You'd need a type function in the latter case.

also note that the logic is to "##" the number to a macro, so that it becomes a new macro name e.g.:

Number = 0: use DEFINE_SPECIFIC_PARAMS_0

Number = 5: use DEFINE_SPECIFIC_PARAMS_5

if I remember this correctly.

so I don't think you'd be able to pull that off with a (possibly-a-type) function to be honest, since the compiler is invoked after the preprocessor.

anyhow, at the very least you'd need to wrap the existing macro into one more to get the literal before using it, which of course might also be problematic :)
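A stripped-down illustration of that "##" dispatch; the macro and member names are invented for the example and are not the actual DEFINE_SPECIFIC_PARAMS implementation.

// Sketch: paste a literal count onto a macro name to pick the right expansion.
#include <iostream>

#define DEFINE_PARAMS_0()
#define DEFINE_PARAMS_1(T1, name1)            T1 name1;
#define DEFINE_PARAMS_2(T1, name1, T2, name2) T1 name1; T2 name2;

// "##" glues the literal number onto the macro name, so the preprocessor picks
// DEFINE_PARAMS_<N> long before the compiler runs.
#define DEFINE_PARAMS(N, ...) DEFINE_PARAMS_##N(__VA_ARGS__)

struct Fixture
{
    DEFINE_PARAMS(2, int, width, double, scale)   // expands to: int width; double scale;
};

int main()
{
    Fixture f{640, 0.5};
    std::cout << f.width << " " << f.scale << "\n";
}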

sivanov-work

chliang

comment created time in 4 hours

FINAL CUT - Library for creating terminal applications with text-based widgets. [LGPL]

created time in 5 hours

pull request comment opencv/opencv

Fixed the warning under Windows and rebased my branch on master to keep a linear history.

HattrickGenerator

chliang

comment created time in 5 hours

Pull request review comment opencv/opencv

template class Params {, 1u} {};
-    Params& cfgInputLayers(const typename PortCfg::In &ll) {
+    /** @brief Sets sequence of CNN input layers names for inference.
+
+    The function is used to set order of CNN input layers. This order will be
+    associated to data that you provide to inference. Count of names has to match to
+    number of CNN inputs. Name is set automatically (without calling this function)
+    if CNN has one input but this doesn't prevent you from doing it yourself.
+
+    @param layer_names array that contains names of CNN input layers.
+    @return reference to object of class Params.
+    */
+    Params& cfgInputLayers(const typename PortCfg::In &layer_names) {
         desc.input_names.clear();
-        desc.input_names.reserve(ll.size());
-        std::copy(ll.begin(), ll.end(),
+        desc.input_names.reserve(layer_names.size());
+        std::copy(layer_names.begin(), layer_names.end(),
                   std::back_inserter(desc.input_names));
         return *this;
     }
-    Params& cfgOutputLayers(const typename PortCfg::Out &ll) {
+    /** @brief Sets sequence of CNN output layers names for inference.
+
+    The function is used to set order of output layers. This order will be
+    associated to data that you receive from inference. Name is set automatically
+    (without calling this function) if CNN has one output but this doesn't prevent
+    you from doing it yourself. Count of names has to match to number of CNN
+    outputs.
+
+    @param layer_names array that contains names of output layers.
+    @return reference to object of class Params.
+    */
+    Params& cfgOutputLayers(const typename PortCfg::Out &layer_names) {
         desc.output_names.clear();
-        desc.output_names.reserve(ll.size());
-        std::copy(ll.begin(), ll.end(),
+        desc.output_names.reserve(layer_names.size());
+        std::copy(layer_names.begin(), layer_names.end(),
                   std::back_inserter(desc.output_names));
         return *this;
     }
+    /** @brief Sets constant input.

Sets -> Specifies a

mpashchenkov

chliang

comment created time in 8 hours

Pull request review comment opencv/opencv

template class Params {, 1u} {};
-    Params& cfgInputLayers(const typename PortCfg::In &ll) {
+    /** @brief Sets sequence of CNN input layers names for inference.
+
+    The function is used to set order of CNN input layers. This order will be
+    associated to data that you provide to inference. Count of names has to match to
+    number of CNN inputs. Name is set automatically (without calling this function)
+    if CNN has one input but this doesn't prevent you from doing it yourself.
+
+    @param layer_names array that contains names of CNN input layers.
+    @return reference to object of class Params.
+    */
+    Params& cfgInputLayers(const typename PortCfg::In &layer_names) {
         desc.input_names.clear();
-        desc.input_names.reserve(ll.size());
-        std::copy(ll.begin(), ll.end(),
+        desc.input_names.reserve(layer_names.size());
+        std::copy(layer_names.begin(), layer_names.end(),
                   std::back_inserter(desc.input_names));
         return *this;
     }
-    Params& cfgOutputLayers(const typename PortCfg::Out &ll) {
+    /** @brief Sets sequence of CNN output layers names for inference.
+
+    The function is used to set order of output layers. This order will be
+    associated to data that you receive from inference. Name is set automatically
+    (without calling this function) if CNN has one output but this doesn't prevent
+    you from doing it yourself. Count of names has to match to number of CNN
+    outputs.
+
+    @param layer_names array that contains names of output layers.
+    @return reference to object of class Params.
+    */
+    Params& cfgOutputLayers(const typename PortCfg::Out &layer_names) {
         desc.output_names.clear();
-        desc.output_names.reserve(ll.size());
-        std::copy(ll.begin(), ll.end(),
+        desc.output_names.reserve(layer_names.size());
+        std::copy(layer_names.begin(), layer_names.end(),
                   std::back_inserter(desc.output_names));
         return *this;
     }
+    /** @brief Sets constant input.
+
+    The function is used to set constant input. This input has to be
+    a prepared tensor since preprocessing is disabled for this case. You should
+    provide data and name of CNN layer which will be associated with it.
+
+    @param layer_name name of CNN layer.
+    @param data cv::Mat that contains data which will be associated with CNN layer.
+    @param hint type of input (IMAGE or TENSOR).
+    @return reference to object of class Params.
+    */
     Params& constInput(const std::string &layer_name,
                        const cv::Mat &data,
                        TraitAs hint = TraitAs::TENSOR) {
         desc.const_inputs[layer_name] = {data, hint};
         return *this;
     }
-    Params& pluginConfig(IEConfig&& cfg) {
-        desc.config = std::move(cfg);
+    /** @brief Sets IE config.
+
+    The function is used to set configuration for device.
+
+    @param cfg map of pairs: (config parameter name, config parameter value).
+    @return reference to object of class Params.
+    */
+    Params& pluginConfig(const IEConfig& cfg) {
+        desc.config = cfg;
         return *this;
     }
-    Params& pluginConfig(const IEConfig& cfg) {
-        desc.config = cfg;
+    /** @overload
+    Function with rvalue parameter.

with an

mpashchenkov

chliang

comment created time in 7 hours

Pull request review comment opencv/opencv

struct PortCfg {, std::tuple_size::value >;};
+/**
+ * Contains description of inference parameters and kit of functions that
+ * fill this parameters.
+ */

Most of the above IE comments apply here as well

mpashchenkov

chliang

comment created time in 7 hours
