[size=x-large]1. Server Configuration[/size]
[b]1) CPU[/b]
2 × Intel(R) Xeon(R) CPU E5620 @ 2.40GHz (4 cores / 8 threads per socket, 16 logical CPUs in total; /proc/cpuinfo excerpt for the last logical CPU below)
processor : 15
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
stepping : 2
cpu MHz : 1596.000
cache size : 12288 KB
physical id : 1
siblings : 8
core id : 10
cpu cores : 4
apicid : 53
initial apicid : 53
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt aes lahf_lm arat epb dts tpr_shadow vnmi flexpriority ept vpid
bogomips : 4799.88
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
[b]2) Memory[/b]
64 GB
free -g
total used free shared buffers cached
Mem: 62 28 33 0 0 16
-/+ buffers/cache: 11 51
Swap: 31 0 31
[b]3) Disks[/b]
768 GB SSD + 1 TB SATA disk
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-lv_root
197G 10G 177G 6% /
tmpfs 32G 0 32G 0% /dev/shm
/dev/mapper/VolGroup00-lv_application
686G 6.6G 645G 2% /application
/dev/sda2 1008M 55M 902M 6% /boot
/dev/sda1 1022M 23M 1000M 3% /boot/efi
/dev/mapper/VolGroup01-lv_data0
916G 19G 851G 3% /data0
[size=x-large]2. Environment[/size]
The stress test was run on a LAN:
10.10.160.154 as the client
10.10.160.155 as the memcached server
memcached startup parameters:
memcached -d -m 8192m -p 11211 -P /tmp/memcached.pid -c 1024 -f 1.25 -n 80
Rationale: the test allocates 8 GB of cache memory. The growth factor is 1.25 and the minimum item data size (-n) is 80 bytes; adding the 48-byte chunk header gives an initial chunk size of 80 + 48 = 128 bytes. Following Sina Weibo's high-concurrency experience, the connection limit is set to 1024.
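As a side note, the chunk-size arithmetic above can be reproduced with a short Java sketch. The 48-byte item header comes from the post itself; the 8-byte chunk alignment and the way the growth factor is applied follow memcached's slab initialization as I understand it, so treat this as an illustration rather than authoritative output (the class name SlabSizes is made up for this example):
public class SlabSizes {
    public static void main(String[] args) {
        int itemHeader = 48;        // per-item header, as stated in the post
        int minChunk = 80;          // -n 80: minimum space for key + value + flags
        double factor = 1.25;       // -f 1.25: growth factor between slab classes
        int align = 8;              // assumed chunk alignment (CHUNK_ALIGN_BYTES)

        double size = itemHeader + minChunk; // 48 + 80 = 128 bytes for the first class
        for (int slabClass = 1; slabClass <= 10; slabClass++) {
            // round the chunk size up to the alignment boundary, as memcached does
            int chunk = (int) Math.ceil(size / align) * align;
            System.out.printf("slab class %2d: chunk size %d bytes%n", slabClass, chunk);
            size = chunk * factor;
        }
    }
}
With these parameters the first few slab classes come out to 128, 160, 200, 256 and 320 bytes.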
[size=x-large]3. Test Results[/size]
Performance was best when the client ran 128 threads issuing concurrent reads and writes.
get stabilized at [b][color=red]194,000 ops/second[/color][/b]
(measured with memcachedClient.get(key); uncommented)
set stabilized at [b][color=red]172,000 ops/second[/color][/b]
(measured with memcachedClient.set(key, 0, System.currentTimeMillis()); uncommented)
With 4 client machines stressing the server at the same time, throughput rose to [color=red]500,000 ops/second[/color], which is roughly the limit of a single, single-threaded server.
Server status while running the set test:
[img]http://dl2.iteye.com/upload/attachment/0089/6556/2a93fb98-877b-3fc6-874e-ee7421d717ff.jpg[/img]
[size=x-large]4. Test Code[/size]
Approach: create a pool of 128 threads. Each thread loops indefinitely, calling the memcached server. The key is generated randomly as String key = new Random().nextFloat() + "" + j, and the value is the current timestamp System.currentTimeMillis(). Each worker thread keeps its own atomic counter, and a separate reporter thread sums those counters once per second and also prints the running average.
package com.panguso.phl;

import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.ThreadPoolExecutor.AbortPolicy;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

import net.rubyeye.xmemcached.MemcachedClient;
import net.rubyeye.xmemcached.MemcachedClientBuilder;
import net.rubyeye.xmemcached.XMemcachedClientBuilder;
import net.rubyeye.xmemcached.command.BinaryCommandFactory;
import net.rubyeye.xmemcached.utils.AddrUtil;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PerformanceTest {

    private static Logger logger = LoggerFactory.getLogger(PerformanceTest.class);

    private static int corePoolSize = 128;
    private static int maximumPoolSize = 128;
    private static long keepAliveTime = 0;
    private static TimeUnit unit = TimeUnit.NANOSECONDS;
    private static BlockingQueue<Runnable> workQueue = new ArrayBlockingQueue<Runnable>(1024);
    private static ThreadFactory threadFactory = Executors.defaultThreadFactory();

    /**
     * AbortPolicy: if the number of submitted tasks exceeds maximumPoolSize plus the
     * workQueue capacity, a java.util.concurrent.RejectedExecutionException is thrown.
     */
    private static RejectedExecutionHandler handler = new AbortPolicy();

    // The thread pool that runs the worker tasks.
    private static ThreadPoolExecutor executor = new ThreadPoolExecutor(corePoolSize,
            maximumPoolSize, keepAliveTime, unit, workQueue, threadFactory, handler);

    private static int concurrent = 32;                                   // number of worker threads
    private static long size = Long.MAX_VALUE;                            // operations per thread
    private static List<AtomicLong> counts = new ArrayList<AtomicLong>(); // per-thread counters
    private static AtomicInteger count = new AtomicInteger(0);
    private static long sum = 0;
    private static AtomicInteger sumCount = new AtomicInteger(1);

    public static void main(String[] args) throws Exception {
        if (args.length > 1) {
            concurrent = Integer.valueOf(args[0]);
            size = Long.valueOf(args[1]);
        }
        for (int i = 0; i < concurrent; i++) {
            counts.add(new AtomicLong(0));
        }
        logger.info("concurrent=" + concurrent);
        logger.info("size per thread=" + size);

        final MemcachedClientBuilder builder =
                new XMemcachedClientBuilder(AddrUtil.getAddresses("10.10.160.155:11211"));
        builder.setCommandFactory(new BinaryCommandFactory());
        final MemcachedClient memcachedClient = builder.build();

        // Each worker thread loops, issuing set (or get) requests and bumping its own counter.
        for (int i = 0; i < concurrent; i++) {
            executor.execute(new Runnable() {
                @Override
                public void run() {
                    AtomicLong current = counts.get(count.getAndIncrement());
                    for (long j = 0; j < size; j++) {
                        String key = new Random().nextFloat() + "" + j;
                        try {
                            memcachedClient.set(key, 0, System.currentTimeMillis());
                            // memcachedClient.get(key);
                        } catch (Throwable e) {
                            logger.error(e.getMessage(), e);
                        }
                        current.incrementAndGet();
                    }
                }
            });
        }

        // Reporter loop: once per second, sum and reset the per-thread counters,
        // then log the throughput for that second and the running average.
        while (true) {
            Thread.sleep(1000);
            long tmp = 0;
            for (int i = 0; i < counts.size(); i++) {
                tmp += counts.get(i).getAndSet(0);
            }
            sum += tmp;
            logger.info("count=" + tmp + ",average=" + sum / sumCount.getAndIncrement());
        }
        // memcachedClient.shutdown();
    }
}
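The test takes the thread count and the number of operations per thread as command-line arguments; without them it defaults to 32 threads that each run essentially forever (Long.MAX_VALUE operations). Assuming the xmemcached and slf4j jars are on the classpath (the exact versions are not given in the post), a 128-thread run would be started along the lines of: java com.panguso.phl.PerformanceTest 128 100000000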
[size=x-large]5. Miscellaneous[/size]
The test logs and code are in the attachments; everyone is welcome to discuss them.
Please respect each other's research results; feel free to reach out with questions. Thanks.
[size=x-large]About the Author[/size]
Nicknames: 澳洲鸟, 猫头哥
Name: 朴海林
QQ: 85977328
MSN: 6301655@163.com
Please credit the original source when reposting.