iPhone Development Refactoring: From Hard-Coding to a Model to a Convention

Whether developing for iPhone or studying it, you will run into code that is less than ideal, and I cannot deny that I keep "contributing" such code myself. Faced with these code "smells", refactoring is clearly an effective remedy. The "iPhone Development Refactoring" series aims to summarize and build on refactorings encountered in iPhone development. It may quote code from open-source and real projects along the way; this is done purely in the spirit of technical exploration, and I ask the original authors' indulgence for any liberties taken.

 

    In iPhone development you frequently need to push a different controller depending on the table row tapped or on some identifier. The most direct implementation is to hard-code it with if…else if…else or switch…case, like this:

 

 

Before refactoring:
- (void)pushViewControllerWithIdentifier:(NSString *)identifier animated:(BOOL)animated {
    UIViewController *viewController = nil;
    if ([identifier isEqualToString:@"Article"]) {
        viewController = [[WHArticleViewController alloc] initWithIdentifier:identifier];
    } else if ([identifier isEqualToString:@"SurvivalKit"]) {
        viewController = [[WHSurvivalKitViewController alloc] init];
    } else if ([identifier isEqualToString:@"Search"]) {
        viewController = [[WHSearchViewController alloc] initWithIdentifier:identifier];
    } else if ([identifier isEqualToString:@"Featured"]) {
        viewController = [[WHFeaturedViewController alloc] initWithIdentifier:identifier];
    } else if ([identifier isEqualToString:@"Image"]) {
        viewController = [[WHImageViewController alloc] initWithIdentifier:identifier];
    } else if ([identifier isEqualToString:@"Settings"]) {
        viewController = [[WHSettingsViewController alloc] init];
    }
    if (viewController == nil) return;  // unknown identifier: nothing to push
    [self pushViewController:viewController animated:animated];
    [viewController release];
}

    The "smell" here is the pile of conditional branches and the near-identical code inside each branch. Every time a new case is needed, this method has to be modified, which violates the open-closed principle. Observing the pattern, it seems we should build a table (dictionary) model that maps each identifier to the corresponding controller class name: setupModel builds the mapping table, and the actual call site simply looks the class up. The refactored code:

 

Refactoring 1:

- (void)setupModel {
    if (self.model == nil) {
        self.model = [NSDictionary dictionaryWithObjectsAndKeys:
                      @"WHArticleViewController", @"Article",
                      @"WHSurvivalKitViewController", @"SurvivalKit",
                      @"WHSearchViewController", @"Search",
                      @"WHFeaturedViewController", @"Featured",
                      @"WHImageViewController", @"Image",
                      @"WHSettingsViewController", @"Settings",
                      nil];
    }
}

- (void)pushViewControllerWithIdentifier:(NSString *)identifier animated:(BOOL)animated {
    // -identifierNamespace is an NSString category method defined elsewhere in the project.
    NSString *identifierNamespace = [identifier identifierNamespace];
    NSString *controllerClassName = [self.model objectForKey:identifierNamespace];
    Class klass = NSClassFromString(controllerClassName);
    if (klass == Nil) return;  // unknown identifier: nothing to push
    UIViewController *viewController = [[klass alloc] initWithIdentifier:identifier];
    [self pushViewController:viewController animated:animated];
    [viewController release];
}

  After this refactoring, the excess branching is gone and so is the duplicated code in each branch. The remaining drawback is that the table data is still embedded in the code, so the next step is to move this code-independent data out into a plist file. The refactored code:

 

Refactoring 2:

- (void)setupModel {
    if(self.model == nil) {
        NSString *plistFilePath = [[NSBundle mainBundle] pathForResource:@"Model" ofType:@"plist"];
        self.model = [NSDictionary dictionaryWithContentsOfFile:plistFilePath];
    }
}

- (void)pushViewControllerWithIdentifier:(NSString *)identifier animated:(BOOL)animated {
    // Same implementation as in Refactoring 1
}
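The matching Model.plist might look like the following (a sketch: the filename and keys simply mirror the dictionary built in Refactoring 1, this is not the author's actual resource file):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Article</key>     <string>WHArticleViewController</string>
    <key>SurvivalKit</key> <string>WHSurvivalKitViewController</string>
    <key>Search</key>      <string>WHSearchViewController</string>
    <key>Featured</key>    <string>WHFeaturedViewController</string>
    <key>Image</key>       <string>WHImageViewController</string>
    <key>Settings</key>    <string>WHSettingsViewController</string>
</dict>
</plist>
```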
 

   For now everything looks pretty good. After a while, though, you may notice that the mapping between identifier and controller name follows a pattern, so perhaps no mapping table is needed at all. Let's try it:

 

Refactoring 3:

- (void)pushViewControllerWithIdentifier:(NSString *)identifier animated:(BOOL)animated {
    NSString *identifierNamespace = [identifier identifierNamespace];
    // The class name is derived from the identifier by convention: no table needed.
    NSString *controllerClassName = [NSString stringWithFormat:@"WH%@ViewController", identifierNamespace];
    Class klass = NSClassFromString(controllerClassName);
    if (klass == Nil) return;  // identifier does not follow the convention
    UIViewController *viewController = [[klass alloc] initWithIdentifier:identifier];
    [self pushViewController:viewController animated:animated];
    [viewController release];
}

 

     After these rounds of refactoring, the code is not only slimmer but also clearer in its logic. Through this progression from hard-coding to a model to a convention, what you see should be more than just steadily improving code; you should also get a feel for how iterative and open-ended refactoring is. Any design or implementation is only reasonable in a particular context and at a particular stage; there is no permanently perfect answer.

Hardware Video Encoding on iPhone — RTSP Server Example

On iOS, the only way to use hardware acceleration when encoding video is to use AVAssetWriter, and that means writing the compressed video to file. If you want to stream that video over the network, for example, it needs to be read back out of the file. I've written an example application that demonstrates how to do this, as part of an RTSP server that streams H264 video from the iPhone or iPad camera to remote clients. The end-to-end latency, measured using a low-latency DirectShow client, is under a second. Latency with VLC and QuickTime playback is a few seconds, since these clients buffer somewhat more data at the client side.

The whole example app is available in source form here under an attribution license. It's a very basic app, but fully functional. Build and run the app on an iPhone or iPad, then use QuickTime Player or VLC to play back the URL that is displayed in the app.

Details, Details

When the compressed video data is written to a MOV or MP4 file, it is written to an mdat atom and indexed in the moov atom. However, the moov atom is not written out until the file is closed, and without that index, the data in mdat is not easily accessible: there are no boundary markers or sub-atoms, just the raw elementary stream. Moreover, the data in the mdat cannot be extracted or used without the data from the moov atom (specifically the lengthSize and the SPS and PPS parameter sets). My example code takes the following approach to this problem. Only video is written using the AVAssetWriter instance, or it would be impossible to distinguish video from audio in the mdat atom. Initially, I create two AVAssetWriter instances; the first frame is written to both, and then one instance is closed. Once the moov atom has been written to that file, I parse the file and assume that the parameters apply to both instances, since the initial conditions were the same.
Once I have the parameters, I use a dispatch_source object to trigger reads from the file whenever new data is written. The body of the mdat chunk consists of H264 NALUs, each preceded by a length field. Although the length of the mdat chunk is not known, we can safely assume that it will continue to the end of the file (until we finish the output file and the moov is added).

For RTP delivery of the data, we group the NALUs into frames by parsing the NALU headers. Since there are no AUDs marking the frame boundaries, this requires looking at several different elements of the NALU header. Timestamps arrive with the uncompressed frames from the camera and are stored in a FIFO; these timestamps are applied to the compressed frames in the same order. Fortunately, the AVAssetWriter live encoder does not require re-ordering of frames.

When the file gets too large, a new instance of AVAssetWriter is used, so that the old temporary file can be deleted. Transition code must then wait for the old instance to be closed so that the remaining NALUs can be read from the mdat atom without reading past the end of that atom into the subsequent metadata. Finally, the new file is opened and timestamps are adjusted. The resulting compressed output is seamless. A little experimentation suggests that we are able to read compressed frames from file about 500ms or so after they are captured, and these frames then arrive around 200ms after that at the client app.

Rotation

For modern graphics hardware, it is very straightforward to rotate an image when displaying it, and this is the method used by AVFoundation to handle rotation of the camera. The buffers are captured, encoded and written to file in landscape orientation. If the device is rotated to portrait mode, a transform matrix is written out to the file to indicate that the video should be rotated for playback. At the same time, the preview layer is also rotated to match the device orientation.
This is efficient and works in most cases. However, there isn’t a way to pass this transform matrix to an RTP client, so the view on a remote player will not match the preview on the device if it is rotated away from the base camera orientation. The solution is to rotate the pixel buffers after receiving them from the capture output and before delivering them to the encoder. There is a cost to this processing, and this example code does not include this extra step.
