In this work, a deterministic sequence suitable for approximate computing on stochastic computing hardware is proposed, and its effectiveness in achieving high accuracy with relatively short sequence lengths is studied for convolutional neural networks. It is shown that, in the range of interest for neural network computations, multiplication errors can be lower than quantization errors with this approach. The sequence lengths required to achieve accuracies within ~0.5% of the floating-point baseline are of the order of 16 and 32 for CIFAR10 classification with the VGG16 and ResNet20 networks, respectively, when all convolutions and matrix multiplications are performed using the proposed sequence. For ImageNet classification, the sequence lengths required for accuracies within ~1% of the floating-point baseline are of the order of 32 for the MobileNetV1 and ResNet50 networks. This work suggests that stochastic computing hardware and approaches may be feasible for approximate neural network computation with higher accuracies, lower latencies, and/or larger networks than previously reported.
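As a rough illustration of the computation pattern only, the sketch below shows the standard unipolar stochastic-computing multiply, in which a single AND gate multiplies two bitstreams and the result is the fraction of 1s in the output. The proposed deterministic sequence itself is not specified in this abstract; the stride-permuted unary stream used here is a hypothetical stand-in for decorrelating the two operands and will generally not match the error behavior reported for the proposed sequence. All function names and the stride-selection rule are illustrative assumptions.

```python
import numpy as np

def unary_stream(x, n):
    """Deterministic unary encoding of x in [0, 1]: the first round(x*n) bits are 1."""
    bits = np.zeros(n, dtype=np.uint8)
    bits[:int(round(x * n))] = 1
    return bits

def strided_stream(x, n, stride):
    """Unary stream reordered with a stride coprime to n, spreading its 1s
    evenly so it is decorrelated from a plain unary stream (hypothetical
    stand-in for the paper's proposed deterministic sequence)."""
    base = unary_stream(x, n)
    return base[(np.arange(n) * stride) % n]

def sc_multiply(x, y, n=32, stride=None):
    """Approximate multiply: AND two deterministic bitstreams, count the 1s."""
    if stride is None:
        # Pick the smallest stride coprime to n (an illustrative choice).
        stride = next(s for s in range(2, n) if np.gcd(s, n) == 1)
    a = unary_stream(x, n)
    b = strided_stream(y, n, stride)
    return np.bitwise_and(a, b).sum() / n

# Compare the multiplication error against the input quantization error,
# in the spirit of the comparison described in the abstract.
rng = np.random.default_rng(0)
for n in (16, 32):
    xs, ys = rng.uniform(0, 1, 1000), rng.uniform(0, 1, 1000)
    mult_err = np.mean([abs(sc_multiply(x, y, n) - x * y) for x, y in zip(xs, ys)])
    quant_err = np.mean(np.abs(np.round(xs * n) / n - xs))  # 1/n-grid rounding
    print(f"n={n}: mean SC product error = {mult_err:.4f}, "
          f"mean input quantization error = {quant_err:.4f}")
```

With sequence length n, each operand is effectively quantized to a 1/n grid, so a length of 16 or 32 corresponds to roughly 4- or 5-bit operand precision; the abstract's claim is that a well-constructed deterministic sequence keeps the multiplication error below this quantization error in the value range relevant to neural networks.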