Progressive loading of images with imageIO
1. Common progressive image loading approaches
The progressive loading we commonly see today is implemented in one of three ways:
1) Load images of different sizes from the web in turn, from small to large: first pull a small thumbnail and display it scaled up, then pull a medium-sized image and display it as soon as it arrives, and finally pull the original image and display that once it has fully downloaded.
2) Pull the full-size image directly, and refresh the display each time a chunk of data arrives, so the image fills in from top to bottom.
3) Combine the first two: first pull a thumbnail and display it scaled up, then use the second method to pull the original image directly. This still loads gradually while saving the intermediate network requests.
2. Implementing progressive image loading with imageIO
The imageIO guide says this: "If you have a very large image, or are loading image data over the web, you may want to create an incremental image source so that you can draw the image data as you accumulate it."
In other words: if you need to load a particularly large image, or load an image over the network, you can draw it progressively by creating an incremental image source. Back when I worked on PowerCam, I tried handling very large images this way on iOS, testing with a map of China at a resolution of 10,000*8,000. Loading the whole image into memory was more than the device could bear, so I gave up. Thinking about it now, such a super-large image could be handled by tiling, processing only a small piece of the picture at a time. I'll leave that question for everyone to think about.
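As a side note, imageIO itself offers one way around the memory problem that is worth knowing: instead of decoding at full resolution, ask the image source for a downsampled thumbnail. This is not the tiling solution, just a minimal sketch; "chinaMap.jpg" is a hypothetical stand-in for the huge file:
NSURL *url = [NSURL fileURLWithPath:@"chinaMap.jpg"]; // hypothetical 10,000*8,000 image
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
if (source) {
    // ask imageIO to decode at most 1024 pixels on the longest side
    NSDictionary *options = @{(id)kCGImageSourceCreateThumbnailFromImageAlways: @YES,
                              (id)kCGImageSourceThumbnailMaxPixelSize: @1024};
    CGImageRef small = CGImageSourceCreateThumbnailAtIndex(source, 0, (CFDictionaryRef)options);
    if (small) {
        UIImage *preview = [UIImage imageWithCGImage:small];
        NSLog(@"preview size: %@", NSStringFromCGSize(preview.size));
        CGImageRelease(small);
    }
    CFRelease(source);
}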
What we will discuss today is using CGImageSource to implement progressive loading of an image from the web. To do this, we create an NSURLConnection and implement its delegate so that the displayed image is updated every time data is received. The main implementation source follows:
//
//  SvIncrementallyImage.m
//
//  Created by maple on 6/27/13.
//  Copyright (c) 2013 maple. All rights reserved.
//
#import "SvIncrementallyImage.h"
#import <ImageIO/ImageIO.h>
#import <CoreFoundation/CoreFoundation.h>
@interface SvIncrementallyImage () <NSURLConnectionDataDelegate> {
    NSURLRequest *_request;
    NSURLConnection *_conn;
    CGImageSourceRef _incrementallyImgSource;
    NSMutableData *_receiveData;
    long long _expectedLength;
    BOOL _isLoadFinished;
}
@property (nonatomic, retain) UIImage *image;
@property (nonatomic, retain) UIImage *thumbImage;
@end
@implementation SvIncrementallyImage
@synthesize imageURL = _imageURL;
@synthesize image = _image;
@synthesize thumbImage = _thumbImage;
- (id)initWithURL:(NSURL *)imageURL
{
    self = [super init];
    if (self) {
        _imageURL = [imageURL retain];
        _request = [[NSURLRequest alloc] initWithURL:_imageURL];
        _conn = [[NSURLConnection alloc] initWithRequest:_request delegate:self];
        // an empty incremental image source that we will feed data into as it arrives
        _incrementallyImgSource = CGImageSourceCreateIncremental(NULL);
        _receiveData = [[NSMutableData alloc] init];
        _isLoadFinished = NO;
    }
    return self;
}
#pragma mark -
#pragma mark NSURLConnectionDataDelegate
- (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response
{
    _expectedLength = response.expectedContentLength;
    NSLog(@"expected length: %lld", _expectedLength);
    NSString *mimeType = response.MIMEType;
    NSLog(@"MIME type: %@", mimeType);
    NSArray *arr = [mimeType componentsSeparatedByString:@"/"];
    if (arr.count < 1 || ![[arr objectAtIndex:0] isEqual:@"image"]) {
        NSLog(@"not an image url");
        [connection cancel];
        [_conn release]; _conn = nil;
    }
}
- (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error
{
    NSLog(@"Connection %@ error, error info: %@", connection, error);
}
- (void)connectionDidFinishLoading:(NSURLConnection *)connection
{
    NSLog(@"Connection Loading Finished!!!");
    // if the length check in didReceiveData never fired, finalize the image now
    if (!_isLoadFinished) {
        _isLoadFinished = YES;
        // pass true for the last parameter so imageIO knows the data is complete
        CGImageSourceUpdateData(_incrementallyImgSource, (CFDataRef)_receiveData, true);
        CGImageRef imageRef = CGImageSourceCreateImageAtIndex(_incrementallyImgSource, 0, NULL);
        if (imageRef) {
            self.image = [UIImage imageWithCGImage:imageRef];
            CGImageRelease(imageRef);
        }
    }
}
- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
{
    [_receiveData appendData:data];
    _isLoadFinished = (_expectedLength == _receiveData.length);
    // feed imageIO everything received so far; the flag tells it whether the data is complete
    CGImageSourceUpdateData(_incrementallyImgSource, (CFDataRef)_receiveData, _isLoadFinished);
    CGImageRef imageRef = CGImageSourceCreateImageAtIndex(_incrementallyImgSource, 0, NULL);
    if (imageRef) {
        // imageIO builds as much of the image as the current data allows
        self.image = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
    }
}
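// under MRC this class owns several objects plus a CF image source; release them all
- (void)dealloc
{
    [_imageURL release];
    [_request release];
    [_conn release];
    [_receiveData release];
    [_image release];
    [_thumbImage release];
    if (_incrementallyImgSource) CFRelease(_incrementallyImgSource);
    [super dealloc];
}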
@end
From the code above we can see that at the start we create an NSURLConnection from the incoming URL and, at the same time, an empty incremental CGImageSource. Then, whenever data arrives, we call CGImageSourceUpdateData to hand the accumulated data to the image source, and call CGImageSourceCreateImageAtIndex to get the latest image.
See how simple progressive loading from the web becomes? Although imageIO does most of the work for us, we should still understand the principle behind it. Image files are structured: the head of the file generally records metadata about the format, followed by the actual image data.
Take the simplest BMP image file as an example:
1) The file starts with a BITMAPFILEHEADER, which mainly records the file size and the offset from the start of the file to the actual image data.
2) Next comes a BITMAPINFOHEADER, which mainly records the width, height, bit depth and other properties of the image.
3) Then comes optional color palette information.
4) The last part is the actual pixel data.
The first three parts are small, generally no more than 100 bytes in total. Once we have this header information, we can easily construct an image from whatever pixel data has arrived so far. As the data becomes more complete, so does the image we construct, until the load finishes.
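In case the layout helps, here is a minimal C sketch of the two BMP headers just described; the field names follow the Windows SDK, and the packing pragma matters because BITMAPFILEHEADER occupies exactly 14 bytes:
#include <stdint.h>
#pragma pack(push, 1)
typedef struct {            // BITMAPFILEHEADER, 14 bytes
    uint16_t bfType;        // magic number, always "BM"
    uint32_t bfSize;        // total file size in bytes
    uint16_t bfReserved1;
    uint16_t bfReserved2;
    uint32_t bfOffBits;     // offset from file start to the pixel data
} SvBitmapFileHeader;
typedef struct {            // BITMAPINFOHEADER, 40 bytes
    uint32_t biSize;        // size of this header (40)
    int32_t  biWidth;       // image width in pixels
    int32_t  biHeight;      // image height in pixels
    uint16_t biPlanes;      // always 1
    uint16_t biBitCount;    // bits per pixel, i.e. the bit depth
    uint32_t biCompression; // compression type (0 = uncompressed)
    uint32_t biSizeImage;   // size of the pixel data (may be 0 when uncompressed)
    int32_t  biXPelsPerMeter;
    int32_t  biYPelsPerMeter;
    uint32_t biClrUsed;     // number of palette entries actually used
    uint32_t biClrImportant;
} SvBitmapInfoHeader;
#pragma pack(pop)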
BMP is a simple image format. JPG and PNG are structured more complexly, but the overall composition is similar. imageIO handles the encoding and decoding of many image formats for us, constructing the final image step by step as the data arrives.
Using imageIO to get the exif information of an image
Besides the pixel information we can see, a photo also carries information such as when it was taken, the aperture, the exposure, and so on. The UIImage class hides all these details, exposing only what we usually care about: the image size, the image orientation, etc. Through the imageIO framework we can get at all the information behind an image; let's take a look.
imageIO is one of the lower-level frameworks in iOS. Its interfaces are all C-style, and the key data is stored in CoreFoundation types. Fortunately, many CoreFoundation types are toll-free bridged to their counterparts in the higher-level Foundation framework, which makes working with image information much more convenient.
CGImageSourceRef is the entry point to all of imageIO; through it we load an image from a file. Once we have a CGImageSourceRef, we can obtain the file's UTI (uniform type identifier), find out how many images the file contains, access each of those images, and read the exif information attached to each one.
You may ask: why could there be several images?
Let me explain: an imageSource corresponds one-to-one with a file. The image files we usually see (jpg, png) contain only a single image, in which case CGImageSourceGetCount returns 1. But nothing rules out a file holding multiple images: a gif, for example, may contain several or even dozens. An earlier post of mine, "How to parse and display Gifs in IOS", loads and parses gifs through an imageSource.
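Getting a full property dump takes only a few lines. A minimal sketch, assuming a hypothetical "photo.jpg" in the app bundle; the cast from the returned CFDictionaryRef to NSDictionary is exactly the toll-free bridging mentioned above:
NSURL *url = [[NSBundle mainBundle] URLForResource:@"photo" withExtension:@"jpg"];
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
if (source) {
    NSLog(@"UTI: %@", CGImageSourceGetType(source));           // e.g. public.jpeg
    NSLog(@"image count: %zu", CGImageSourceGetCount(source)); // 1 for jpg/png, more for gif
    NSDictionary *props = (NSDictionary *)CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
    NSLog(@"image property: %@", props); // prints a dictionary like the one below
    [props release]; // "Copy" in the function name means we own the result
    CFRelease(source);
}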
Below is the exif information of a photo taken with the system camera:
image property: {
ColorModel = RGB;
DPIHeight = 72;
DPIWidth = 72;
Depth = 8;
Orientation = 6;
PixelHeight = 2448;
PixelWidth = 3264;
"{Exif}" = {
ApertureValue = "2.526069";
BrightnessValue = "-0.5140446";
ColorSpace = 1;
ComponentsConfiguration = (
1,
2,
3,
0
);
DateTimeDigitized = "2013:06:24 22:11:30";
DateTimeOriginal = "2013:06:24 22:11:30";
ExifVersion = (
2,
2,
1
);
ExposureMode = 0;
ExposureProgram = 2;
ExposureTime = "0.06666667";
FNumber = "2.4";
Flash = 16;
FlashPixVersion = (
1,
0
);
FocalLenIn35mmFilm = 33;
FocalLength = "4.13";
ISOSpeedRatings = (
400
);
MeteringMode = 3;
PixelXDimension = 3264;
PixelYDimension = 2448;
SceneCaptureType = 0;
SensingMethod = 2;
ShutterSpeedValue = "3.906905";
SubjectArea = (
2815,
1187,
610,
612
);
WhiteBalance = 0;
};
"{GPS}" = {
Altitude = "27.77328";
AltitudeRef = 0;
Latitude = "22.5645";
LatitudeRef = N;
Longitude = "113.8886666666667";
LongitudeRef = E;
TimeStamp = "14:11:23.36";
};
"{TIFF}" = {
DateTime = "2013:06:24 22:11:30";
Make = Apple;
Model = "iPhone 5";
Orientation = 6;
ResolutionUnit = 2;
Software = "6.1.4";
XResolution = 72;
YResolution = 72;
"_YCbCrPositioning" = 1;
};
}
From this we can see that the first few entries show the picture's color model, color depth, DPI in the x and y directions, actual pixel dimensions and orientation. When I first saw this Orientation I was delighted, thinking it was simply the imageOrientation of UIImage, but experiments showed the two are not the same. The orientation here is the one defined by the exif standard: the values 1 through 8 correspond to the eight UIImageOrientation values, just in a different order. Their correspondence is as follows:
enum {
    exifOrientationUp            = 1, // UIImageOrientationUp
    exifOrientationDown          = 3, // UIImageOrientationDown
    exifOrientationRight         = 6, // UIImageOrientationRight
    exifOrientationLeft          = 8, // UIImageOrientationLeft
    // not every camera can produce the four mirrored orientations, but iOS supports them all
    exifOrientationUpMirrored    = 2, // UIImageOrientationUpMirrored
    exifOrientationDownMirrored  = 4, // UIImageOrientationDownMirrored
    exifOrientationLeftMirrored  = 5, // UIImageOrientationLeftMirrored
    exifOrientationRightMirrored = 7, // UIImageOrientationRightMirrored
};
typedef NSInteger ExifOrientation;
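If you need to build a UIImage from decoded pixels plus an exif orientation, a small conversion helper following the table above is enough. A sketch (the helper name is my own):
static UIImageOrientation SvImageOrientationFromExif(NSInteger exifOrientation)
{
    // exif defines values 1-8; anything else falls back to "up"
    switch (exifOrientation) {
        case 1: return UIImageOrientationUp;
        case 2: return UIImageOrientationUpMirrored;
        case 3: return UIImageOrientationDown;
        case 4: return UIImageOrientationDownMirrored;
        case 5: return UIImageOrientationLeftMirrored;
        case 6: return UIImageOrientationRight;
        case 7: return UIImageOrientationRightMirrored;
        case 8: return UIImageOrientationLeft;
        default: return UIImageOrientationUp;
    }
}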
Today most digital cameras and phones on the market have a built-in orientation sensor, and orientation information is written into every photo taken, though usually only the first four values appear. The mirrored orientations are typically produced when taking selfies with a phone's front camera.
Why does exif record an orientation at all?
A camera's sensor has a fixed orientation, and the captured pixels come out in that default orientation. If the pixels were physically rotated for every shot, a camera shooting 20 frames per second would spend a great deal of time on rotation alone. The smarter way is to record the orientation once at capture time and rotate only at display time. So exif defines a standard orientation parameter: any software that reads the picture and follows the rules reads the orientation while loading, then rotates the image accordingly. That achieves both fast capture and correct display, so why not?
Common image browsing and editing software follows this rule, but one of the most widely used image viewers (the one built into Windows) does not read the orientation. That is why photos imported from digital cameras and phones into Windows often show up in the wrong orientation. I don't know what the Windows empire was thinking; perhaps it holds a grudge against the organization that defined exif.
Beyond what's covered above, the dump also contains the GPS information recorded at capture time; the Places feature in iOS's built-in Photos app is implemented from this GPS information. There is plenty of other information too, and interested readers can write a small program and explore it themselves; a starting point follows.
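Here is a sketch of pulling a few individual fields out of the property dictionary with ordinary key lookups; the keys are real kCGImageProperty* constants from ImageIO, and "photo.jpg" is again a hypothetical bundle resource:
NSURL *url = [[NSBundle mainBundle] URLForResource:@"photo" withExtension:@"jpg"];
CGImageSourceRef source = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
if (source) {
    NSDictionary *props = (NSDictionary *)CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
    NSNumber *orientation = [props objectForKey:(id)kCGImagePropertyOrientation];       // exif orientation, 1-8
    NSDictionary *exif = [props objectForKey:(id)kCGImagePropertyExifDictionary];       // the "{Exif}" block
    NSString *dateTaken = [exif objectForKey:(id)kCGImagePropertyExifDateTimeOriginal]; // e.g. "2013:06:24 22:11:30"
    NSDictionary *gps = [props objectForKey:(id)kCGImagePropertyGPSDictionary];         // the "{GPS}" block
    NSLog(@"orientation %@, taken %@, gps %@", orientation, dateTaken, gps);
    [props release];
    CFRelease(source);
}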