Deep neural networks (DNNs) have shown significant improvements in several application domains, including computer vision and speech recognition. In computer vision, a particular type of DNN, the Convolutional Neural Network (CNN), has demonstrated state-of-the-art results in object recognition and detection.
Convolutional neural networks show reliable results on object recognition and detection that are useful in real world applications. Concurrent to the recent progress in recognition, interesting advancements have been happening in virtual reality (VR by Oculus), augmented reality (AR by HoloLens), and smart wearable devices. Putting these two pieces together, we argue that it is the right time to equip smart portable devices with the power of state-of-the-art recognition systems. However, CNN-based recognition systems need large amounts of memory and computational power. While they perform well on expensive, GPU-based machines, they are often unsuitable for smaller devices like cell phones and embedded electronics.
In order to simplify the networks, Professor Zhang tries to introduce simple, efficient, and accurate approximations to CNNs by binarizing the weights. Professor Zhang needs your help.
More specifically, you are given a weight vector W=(w1,w2,...,wn). Professor Zhang would like to find a binary vector B=(b1,b2,...,bn) (bi∈{+1,−1}) and a scaling factor α≥0 such that ∥W−αB∥² is minimized.
Note that ∥⋅∥ denotes the Euclidean norm (i.e. ∥X∥ = √(x1² + x2² + ⋯ + xn²), where X=(x1,x2,...,xn)).
Input
There are multiple test cases. The first line of input contains an integer T, indicating the number of test cases. For each test case:
The first line contains an integer n (1≤n≤100000) -- the length of the vector. The next line contains n integers: w1,w2,...,wn (−10000≤wi≤10000).
Output
For each test case, output the minimum value of ∥W−αB∥² as an irreducible fraction "p/q", where p and q are integers and q>0.
Sample Input
3
4
1 2 3 4
4
2 2 2 2
5
5 6 2 3 4
Sample Output
5/1
0/1
10/1
Author
zimpha
Expanding the expression:

∥W−αB∥² = α² Σᵢ₌₁ⁿ bᵢ² − 2α Σᵢ₌₁ⁿ wᵢbᵢ + Σᵢ₌₁ⁿ wᵢ².

Since bᵢ ∈ {+1,−1}, we have Σᵢ₌₁ⁿ bᵢ² = n, and clearly c = Σᵢ₌₁ⁿ wᵢ² is also a constant. The problem therefore reduces to minimizing α²n − 2α Σᵢ₌₁ⁿ wᵢbᵢ + c. For any fixed α > 0, it suffices to make Σᵢ₌₁ⁿ wᵢbᵢ as large as possible; clearly bᵢ = sign(wᵢ) maximizes it, giving Σᵢ₌₁ⁿ wᵢbᵢ = Σᵢ₌₁ⁿ |wᵢ|. Furthermore, the expression above is a quadratic in α, so it attains its minimum at α = (1/n) Σᵢ₌₁ⁿ wᵢbᵢ = (1/n) Σᵢ₌₁ⁿ |wᵢ|.

Simplifying, the minimum value is Σᵢ₌₁ⁿ wᵢ² − (1/n)(Σᵢ₌₁ⁿ |wᵢ|)².
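As a quick check of this closed form against the first sample case, take W = (1, 2, 3, 4), n = 4:

```latex
\sum_{i=1}^{4} w_i^2 - \frac{1}{4}\left(\sum_{i=1}^{4} |w_i|\right)^2
  = (1 + 4 + 9 + 16) - \frac{(1+2+3+4)^2}{4}
  = 30 - \frac{100}{4} = 5,
```

which matches the expected output 5/1.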
#include <cstdio>

typedef long long ll;

ll a[100005];

// Greatest common divisor via the Euclidean algorithm.
ll gcd(ll a, ll b)
{
    return b == 0 ? a : gcd(b, a % b);
}

int main()
{
    ll t, n;
    scanf("%lld", &t);
    while (t--)
    {
        scanf("%lld", &n);
        ll s1 = 0, sum = 0;  // s1 = sum of wi^2, sum = sum of |wi|
        for (int i = 1; i <= n; i++)
        {
            scanf("%lld", &a[i]);
            s1 += a[i] * a[i];
            sum += a[i] >= 0 ? a[i] : -a[i];
        }
        ll s2 = sum * sum;
        // The minimum is s1 - sum^2/n = (n*s1 - s2)/n; reduce the fraction.
        ll g = gcd(n * s1 - s2, n);
        printf("%lld/%lld\n", (n * s1 - s2) / g, n / g);
    }
    return 0;
}