Conversation
I didn't look yet, but it'd be useful anyway. Could you explain why this is a breaking change in the description? And I suppose we need to add something like BREAKING CHANGE (?) somewhere.
Force-pushed from 493638b to 773dc2f
Done. The exclamation mark in the commit message conveys the breaking change.
Are we adding a feature here, or are we fixing something?
I have thought a little about it, and I'm not sure what to call it. I'm hesitant to call it a fix because we're not fixing broken functionality (a bug); we are adding something that simply didn't exist before. On the other hand, it's also a bit of a stretch to call it a feature, because the validation doesn't bring any new usable functionality. But I do lean more towards feat, because it's still something new that is added.
Here is my perspective, let me know what you think: I think we're fixing a bug because, in the validate() calls, we should have been returning an error. And this is not a feature because we're not adding any new functionality. We're merely fixing a bug, and possibly limiting our support set as a result of these conservative checks.
Yes, it also makes sense. I think my reasoning was: if the library is used as intended, will something then go wrong? From that perspective, no, it's not a bug. But if
Force-pushed from 773dc2f to f6ceb31
gunes-arm left a comment
I've only been able to check up to NEGather. I'll continue.
@@ -44,6 +45,7 @@ bool CPPUpsampleKernel::is_parallelisable() const
void CPPUpsampleKernel::configure(const ITensor *input, ITensor *output, const PadStrideInfo &info)
We should add validate() calls to some kernels, especially in CPU, and call their validate functions from the callers. But I think it should be a separate ticket. Can you create a list of kernels, such as this one, that don't have a validate() function, and create a ticket for this?
The size check implies a tensor size restriction to 2^31-1 bytes. Kernel
configurations larger than that will no longer validate.

Resolves: COMPMID-8697
Signed-off-by: Andreas Flöjt <andreas.floejt@arm.com>
Change-Id: I54f73ade5cb4a0d34d831505d83d1d7ef526b5db
Force-pushed from f6ceb31 to b8c4667
ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_TYPES(input1, output);
}
// There's no default configuration, so we expect that output is initialized.
ARM_COMPUTE_ERROR_ON(output->total_size() == 0);
As before, we shouldn't throw here; we should return a Status.
ARM_COMPUTE_RETURN_ERROR_ON_MSG((input->tensor_shape()[idx_height] % stride) != 0,
                                "The height of the input tensor must be a multiple of stride");

TensorShape output_shape = misc::shape_calculator::compute_reorg_output_shape(*input, stride);
}
else
{
    const TensorShape output_shape = misc::shape_calculator::compute_space_to_depth_shape(input, block_shape);
I think it'd be good to check this in the if branch, too.
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2023, 2025 Arm Limited.
+ * Copyright (c) 2023, 2025, 2026 Arm Limited.

{
    ARM_COMPUTE_TRACE_EVENT(ARM_COMPUTE_PROF_CAT_CPU, ARM_COMPUTE_PROF_LVL_CPU, "CpuDynamicGemmKernel::validate");
    ARM_COMPUTE_RETURN_ERROR_ON_NULLPTR(a, b, c, d);
    ARM_COMPUTE_RETURN_ERROR_ON_SIZE_UNSUPPORTED(a, b, c, d);
These shapes can be dynamic, so they shouldn't be checked here.
ARM_COMPUTE_ERROR("ElementWiseUnary operation not supported");
}

const auto output_shape = TensorShape::broadcast_shape(src.tensor_shape());
This is a unary operator, so the output shape will be equal to the src shape; this call is doing the same thing, I suppose.
Also, we should validate the shapes only when the tensor does not have a dynamic shape.
if (indices->total_size() != 0)
{
-    TensorInfo idx_info(TensorInfo(compute_pool_shape(*src, pool_info), 1, DataType::U32));
+    const auto idx_info = TensorInfo(TensorInfo(compute_pool_shape(*src, pool_info), 1, DataType::U32));
Why do we need nested constructors here?
-    TensorInfo temp_info;
-    ARM_COMPUTE_RETURN_ON_ERROR(CLCrop::validate(input->clone().get(), &temp_info, {0, 0}, {1, 1},
+    auto temp_info = output->clone();
+    temp_info->set_tensor_shape(TensorShape(input->dimension(0), crop_size.x, crop_size.y));