In image processing, the wavelet transform (WT) provides multiscale image decomposition, yielding a low-resolution approximation image together with high-frequency detail components. Drawing a parallel to this concept, we view feature maps in convolutional neural networks (CNNs) as a similar mixture, but within the channel domain. Inspired by multitask learning (MTL) principles, we propose a wavelet-based dual-task (WDT) framework. This framework employs WT in the channel domain to split a single task into two parallel tasks, thereby transforming traditional single-task CNNs into dynamic dual-task networks. The WDT framework integrates seamlessly with a variety of popular network architectures, enhancing their versatility and efficiency, and enables more rational resource allocation in CNNs by balancing low-frequency and high-frequency information. Extensive experiments on CIFAR-10, ImageNet, HMDB51, and UCF101 validate the effectiveness of our approach. The results show significant performance improvements over traditional CNNs on classification tasks, achieved with fewer parameters and less computation. In summary, our work presents a pioneering step toward redefining the performance and efficiency of CNN-based tasks through WT.
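To make the channel-domain idea concrete, the sketch below shows one plausible way a Haar wavelet could split a CNN feature map into low-frequency (approximation) and high-frequency (detail) channel halves that parallel branches of a dual-task network might then process. This is an illustrative assumption in PyTorch, not the paper's actual implementation; the function name `haar_channel_split` is hypothetical.

```python
# Illustrative sketch only: a channel-domain Haar split of a feature map,
# assumed here as one possible realization of the WDT channel-domain WT.
import torch


def haar_channel_split(x: torch.Tensor):
    """Split a feature map (N, C, H, W) with even C into two (N, C/2, H, W)
    tensors holding channel-wise approximation and detail coefficients."""
    assert x.size(1) % 2 == 0, "channel count must be even for a Haar split"
    even, odd = x[:, 0::2], x[:, 1::2]   # pair adjacent channels
    scale = 2 ** 0.5
    low = (even + odd) / scale           # low-frequency approximation channels
    high = (even - odd) / scale          # high-frequency detail channels
    return low, high


if __name__ == "__main__":
    feat = torch.randn(8, 64, 32, 32)    # a typical CNN feature map
    low, high = haar_channel_split(feat)
    print(low.shape, high.shape)         # each: torch.Size([8, 32, 32, 32])
```

Under this assumption, each half carries half the channels of the original map, which is consistent with the abstract's claim that the dual-task design can reduce parameters and computation relative to processing the full feature map in a single branch.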