From: "Zebediah Figura (she/her)" Subject: Re: [PATCH vkd3d] vkd3d-shader/hlsl: Support casts between all numeric types on constant folding. Message-Id: Date: Thu, 30 Dec 2021 16:03:05 -0600 In-Reply-To: <5700bdee7b108779d2c5d0123fea6763@codeweavers.com> References: <658b9e98-ef2e-d928-8bb0-5da1664b92fc@codeweavers.com> <20211229145154.123397-1-fcasas@codeweavers.com> <5700bdee7b108779d2c5d0123fea6763@codeweavers.com> On 12/30/21 15:55, Francisco Casas wrote: > I agree. And if we do that, maybe we can do functions like these: > >> /* This macro is just to illustrate the similarity among the functions */ >> #define CONSTANT_OP2_FUNCTION(function_name,operator) \ >> void function_name(struct hlsl_ir_constant *res, struct hlsl_ir_constant *arg1, \ >> struct hlsl_ir_constant *arg2, enum hlsl_base_type type) \ >> { \ >> for(int k=0; k<4; k++) \ >> { \ >> switch (type) \ >> { \ >> case HLSL_TYPE_FLOAT: \ >> case HLSL_TYPE_HALF: \ >> res->value[k].f = arg1->value[k].f operator arg2->value[k].f; \ >> break; \ >> case HLSL_TYPE_DOUBLE: \ >> res->value[k].d = arg1->value[k].d operator arg2->value[k].d; \ >> break; \ >> case HLSL_TYPE_INT: \ >> res->value[k].i = arg1->value[k].i operator arg2->value[k].i; \ >> break; \ >> case HLSL_TYPE_UINT: \ >> res->value[k].u = arg1->value[k].u operator arg2->value[k].u; \ >> break; \ >> case HLSL_TYPE_BOOL: \ >> res->value[k].u = !!((!!(arg1->value[k].u)) operator (!!(arg2->value[k].u))) * 0xffffffff; \ >> break; \ >> default: \ >> assert(0); \ >> break; \ >> } \ >> } \ >> } >> >> CONSTANT_OP2_FUNCTION(constant_value_sum,+) >> CONSTANT_OP2_FUNCTION(constant_value_sub,-) >> CONSTANT_OP2_FUNCTION(constant_value_mult,+) >> CONSTANT_OP2_FUNCTION(constant_value_neg,* (-1) + 0 *) /* horrid? */ >> CONSTANT_OP2_FUNCTION(constant_value_div,/) /* horrid? */ > > And call one of these on each case of the switch. > > As using macros this way is too ugly, I think we should consider creating > a new "constant_ops.c" file to define all these boilerplate functions for > constant values. They may get even larger if we support matrices. Maybe. I'd prefer to avoid preprocessor macros, though, and defer any sort of code generation until it becomes necessary. > >>> + { >>> + if (instr->data_type->dimx != arg1->node.data_type->dimx >>> + || instr->data_type->dimy != arg1->node.data_type->dimy) >>> + { >>> + WARN("Cast from %s to %s.\n", debug_hlsl_type(ctx, arg1->node.data_type), >>> + debug_hlsl_type(ctx, instr->data_type)); >> >> Why remove the "return false" from this? Also, why change it from a FIXME? >> > > Casting to a vector type with a smaller dimension works in the native compiler, > albeit with a warning. Since hlsl_cast_constant_value() copies all 4 values even > if they are not used, these cases should be covered now. > However, yes, casting to a type with a larger dimension should result in an error, > unless it is from a scalar. > I recognize I didn't think it too much, I will handle these cases better. Sure, but we should handle that elsewhere. In fact we already have lower_narrowing_casts() for this. > >>> + } >>> + memcpy(res->value, arg1->value, sizeof(res->value)); >> >> What's the point of copying to the value if we're just going to overwrite it? >> > > hlsl_cast_constant_value() expects the original value to be in the same constant > on which the result is written, so it has to be copied on the new node first. > The alternative would be receiving both a source and target hlsl_ir_constant... > maybe it is better that way. Eh, indeed, I skimmed and misread it. 
I'm inclined to specify source and target separately, yes.
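
Roughly, something along these lines is what I have in mind: a plain helper
per operation rather than a macro, with the destination constant passed
separately from the sources. The fold_add() name and the exact signature are
only for illustration (they assume the existing hlsl_ir_constant and
hlsl_base_type declarations), not a final interface; the cast helper would
get the same source/destination split.

/* Illustrative sketch only: a non-macro fold helper that writes into a
 * destination constant distinct from its sources. */
static void fold_add(struct hlsl_ir_constant *dst, const struct hlsl_ir_constant *src1,
        const struct hlsl_ir_constant *src2, enum hlsl_base_type type)
{
    unsigned int k;

    for (k = 0; k < 4; ++k)
    {
        switch (type)
        {
            case HLSL_TYPE_FLOAT:
            case HLSL_TYPE_HALF:
                dst->value[k].f = src1->value[k].f + src2->value[k].f;
                break;

            case HLSL_TYPE_DOUBLE:
                dst->value[k].d = src1->value[k].d + src2->value[k].d;
                break;

            case HLSL_TYPE_INT:
                dst->value[k].i = src1->value[k].i + src2->value[k].i;
                break;

            case HLSL_TYPE_UINT:
                dst->value[k].u = src1->value[k].u + src2->value[k].u;
                break;

            case HLSL_TYPE_BOOL:
                /* Booleans are stored as 0 or ~0u, as in the macro above. */
                dst->value[k].u = (src1->value[k].u || src2->value[k].u) ? ~0u : 0;
                break;

            default:
                assert(0);
                break;
        }
    }
}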