My opinion is no. I felt that AZ's primary publication was high quality: it reported the planned analyses, presented a very important unplanned analysis, explained the unexpected results of that analysis, and stated that more time and data would be needed to confirm any findings.
The question boils down to intent-to-treat versus per-protocol analysis. Every trial has unplanned deviations, and sometimes a manufacturing problem means that a large fraction of trial participants end up under a non-randomized condition. This can be severe enough that the planned analyses have to be reframed before any generalizable result can be reported. Unfortunately for AZ, who seem to have a fairly good vaccine, all of the above happened in their ChAdOx trial. It cost them vital time and "vitaler" money in a highly competitive environment.
Deviations and unexpected findings can at times generate hypotheses so striking that they need to be evaluated in subsequent studies. But the operative word is "subsequent": you can't use the same data that generated a hypothesis to confirm that hypothesis. Minoxidil, for instance, was a vasodilator piloted as a heart medication that was unexpectedly found to cause hair growth in those with male pattern baldness; a separate follow-up study was then conducted and showed a statistically significant effect. For AZ, if they truly believed the low dose or "priming low dose" was effective, they would need a major protocol amendment and an analysis plan that excludes results from the prior analyses, a very costly endeavor. I recall this was investigated at length, and the priming-dose hypothesis was ultimately rejected. ChAdOx now has emergency use listing from the WHO and is being used globally (outside the US).
In general, when reporting the results of a clinical trial, it is better to err on the side of full disclosure. On my reading of the primary publication, they correctly report the intent-to-treat (so-called "average") efficacy of 70%. The "per protocol" efficacy of the AZ vaccine (among those who received both full doses) was also reported as 62·1% (95% CI 41·0–75·7), which is still a promising result. And they correctly report a rather serious post-hoc finding: that people receiving an "accidental" dose had statistically better efficacy than those receiving the planned dose.
In participants who received two standard doses, vaccine efficacy was 62·1% (95% CI 41·0–75·7; 27 [0·6%] of 4440 in the ChAdOx1 nCoV-19 group vs 71 [1·6%] of 4455 in the control group) and in participants who received a low dose followed by a standard dose, efficacy was 90·0% (67·4–97·0; three [0·2%] of 1367 vs 30 [2·2%] of 1374; p interaction=0·010). Overall vaccine efficacy across both groups was 70·4% (95·8% CI 54·8–80·6; 30 [0·5%] of 5807 vs 101 [1·7%] of 5829).
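To connect those headline figures to the raw case counts in the excerpt, here is a rough back-of-the-envelope check (my own sketch, not from the paper; the published estimates come from an adjusted robust Poisson model, so the simple unadjusted risk ratios below land close to, but not exactly on, the quoted values):

```python
# Unadjusted vaccine efficacy: VE = 1 - (attack rate in vaccinees / attack rate in controls).
# Case counts and denominators are taken from the excerpt above; the paper's own
# estimates use an adjusted robust Poisson regression, so they differ slightly.
def vaccine_efficacy(cases_vax, n_vax, cases_ctrl, n_ctrl):
    relative_risk = (cases_vax / n_vax) / (cases_ctrl / n_ctrl)
    return 1 - relative_risk

print(f"SD/SD:   {vaccine_efficacy(27, 4440, 71, 4455):.1%}")   # ~61.8% (paper: 62.1%)
print(f"LD/SD:   {vaccine_efficacy(3, 1367, 30, 1374):.1%}")    # ~89.9% (paper: 90.0%)
print(f"Overall: {vaccine_efficacy(30, 5807, 101, 5829):.1%}")  # ~70.2% (paper: 70.4%)
```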
The article goes further, stating that the results were unexpected, that the authors performed additional subgroup analyses to see whether confounding could explain the finding, and that no statistical test identified a difference. They further clarify that the results, while promising, will require later read-outs of the data and results from other studies to determine whether any difference is real rather than due to chance.
The issue itself had to do with manufacturing, and the low dosing came down to a lack of oversight on AZ's part. The primary publication is open access, and you can read it for yourself: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(20)32661-1/fulltext#tbl2
Two dosage groups were included in COV002: participants who received a low dose of the vaccine (2·2 × 10^10 viral particles) as their first dose and were boosted with a standard dose (in the LD/SD group), and subsequent cohorts who were vaccinated with two standard-dose vaccines (SD/SD group). Initial dosing in COV002 was with a batch manufactured at a contract manufacturing organisation using chromatographic purification. During quality control of this second batch, differences were observed between the quantification methods (spectrophotometry and quantitative PCR [qPCR]) prioritised by different manufacturing sites. In consultation with the national regulator (Medicines and Healthcare products Regulatory Agency), we selected a dose of 5 × 10^10 viral particles by spectrophotometer (2·2 × 10^10 viral particles by qPCR), in order to be consistent with the use of spectrophotometry in the phase 1 study (COV001),5 and to ensure the dose was within a safe and immunogenic range according to measurements by both methods. A lower-than-anticipated reactogenicity profile was noted in the trial, and unexpected interference of an excipient with the spectrophotometry assay was identified. After review and approval by the regulator, it was concluded that the qPCR (low-dose) reading was more accurate and further doses were adjusted to the standard dose (5 × 10^10 viral particles) using a qPCR assay. The protocol was amended on June 5, 2020, resulting in enrolment of two distinct groups with different dosing regimens with no pause in enrolment (version 6.0; appendix 2 p 330). A suite of assays has now been developed for characterisation of concentration (which confirmed the low and standard dosing), and future batches are all released with a specification dose of 3·5–6·5 × 10^10 viral particles, and this was used for the booster doses in the efficacy analysis presented here.
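In other words, by the more accurate qPCR measurement the affected participants received well under half of the intended first dose. A quick sanity check on the figures quoted above (my own arithmetic, not from the paper):

```python
# Figures from the quoted passage: the spectrophotometer-selected first dose
# corresponded to only 2.2e10 viral particles by qPCR, versus the intended 5e10.
standard_dose = 5.0e10   # viral particles (intended standard dose)
qpcr_dose = 2.2e10       # viral particles (qPCR reading of the affected batch)
print(f"Affected first doses were ~{qpcr_dose / standard_dose:.0%} of the standard dose")
# -> ~44%
```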