Kernel methods have been shown to be effective for many machine learning tasks such as classification and regression. In particular, support vector machines with the Gaussian kernel have proved to be powerful classification tools. The standard way to apply kernel methods is the kernel trick, in which the inner product of vectors in the feature space is computed via the kernel function. Using the kernel trick for SVMs, however, leads to training time that is quadratic in the number of input vectors and classification time that is linear in the number of support vectors. We introduce a new kernel, the CRO (Concomitant Rank Order) kernel, which approximates the Gaussian kernel on the unit sphere. We also introduce a randomized feature map, the CRO feature map, which produces sparse, high-dimensional feature vectors whose inner product asymptotically equals the CRO kernel. Because the CRO feature map can be computed with the Discrete Cosine Transform, the cost of computing feature vectors is low, allowing us to compute the feature map explicitly. Combining the CRO feature map with a linear SVM, we obtain the CROification algorithm, which achieves the efficiency of a sparse high-dimensional linear SVM with the accuracy of the Gaussian kernel SVM.
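
To make the pipeline concrete, the following is a minimal sketch of the kind of procedure the abstract describes: each input is unit-normalized, passed through a sign-randomized DCT as a fast stand-in for a random rotation, and the positions of its k largest entries define a sparse binary feature vector that is fed to a linear SVM. The function name cro_features and the parameters D (feature dimension) and k (number of nonzeros) are illustrative assumptions, not the paper's exact CRO construction, which may differ in detail.

```python
import numpy as np
from scipy.fft import dct
from scipy.sparse import csr_matrix
from sklearn.svm import LinearSVC

def cro_features(X, D=4096, k=32, seed=0):
    """Sparse binary rank-order features (illustrative sketch only;
    the paper's CRO feature map may differ in its exact details)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # The kernel is defined on the unit sphere: unit-normalize each row.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    # Zero-pad to D dimensions and flip signs randomly; the sign flip
    # followed by a DCT acts as a cheap surrogate for a random rotation.
    Z = np.zeros((n, D))
    Z[:, :d] = X
    signs = rng.choice([-1.0, 1.0], size=D)
    T = dct(Z * signs, norm="ortho", axis=1)
    # Rank order step: place a 1 at the positions of the k largest entries,
    # yielding a sparse, high-dimensional binary feature vector.
    top = np.argpartition(T, -k, axis=1)[:, -k:]
    rows = np.repeat(np.arange(n), k)
    return csr_matrix((np.ones(n * k), (rows, top.ravel())), shape=(n, D))

# Usage: an explicit feature map plus a linear SVM replaces a kernel SVM.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 20))
    y = (X[:, 0] * X[:, 1] > 0).astype(int)  # a nonlinearly separable toy label
    clf = LinearSVC().fit(cro_features(X), y)
    print(clf.score(cro_features(X), y))
```

Because the feature vectors are explicit, binary, and k-sparse, both training and classification reduce to sparse linear operations, which is the source of the claimed efficiency.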