Layer 7 is just the last convolutional layer of VGG16.
Layer 8 would normally be a fully connected (dense) layer, but it is replaced with a 1×1 convolution. If the input to this layer were of size 1×1, the two are exactly equivalent; otherwise, it is as if you mapped each pixel through the same dense layer.
So if your input size is such that the input to layer 8 is 1×1, there is no difference. If it is larger, the 1×1 convolution still lets the network operate in a meaningful way, producing a spatial grid of outputs instead of a single one.
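A minimal numpy sketch of this equivalence (the sizes below are arbitrary, picked just for illustration): a 1×1 convolution over an H×W feature map with shared weights produces exactly the same result as applying one dense layer independently to every pixel.

```python
import numpy as np

# Hypothetical sizes: a 4x4 feature map with 8 channels, mapped to 3 output channels.
H, W, C_in, C_out = 4, 4, 8, 3
rng = np.random.default_rng(0)
x = rng.standard_normal((H, W, C_in))
weight = rng.standard_normal((C_out, C_in))  # the shared dense-layer weights
bias = rng.standard_normal(C_out)

# 1x1 convolution: at each spatial position, a weighted sum over input channels.
conv_out = np.einsum('hwc,oc->hwo', x, weight) + bias

# The same dense layer applied separately to every pixel.
dense_out = np.stack(
    [[weight @ x[i, j] + bias for j in range(W)] for i in range(H)]
)

# Both paths give identical outputs at every pixel.
print(np.allclose(conv_out, dense_out))
```

If the feature map were 1×1, both reduce to a single matrix-vector product, which is exactly a dense layer.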