Adding more nodes in a custom Layer

I have a custom layer. If there are n inputs, I want 2n outputs. The second n outputs will be set programmatically in the custom layer's call function, according to specific values in the first n outputs. For example:

If there are 10 inputs, then this layer will have 20 outputs, so for x = 0 to 9, output[x+10] = f(output[x])

My issue is that I can’t seem to change the output tensor’s dimensions. Will someone throw me a bone?

def call(self, inputs):

    batch_size = inputs.shape[0]

    # "Call" is being called on compile, I think, so batch_size is None. I'll figure that out later
    if not batch_size:
        batch_size = 1

    # Make as many one-hot nodes as there are inputs, as each input will receive a categorization
    one_hot_size = self.units

    # This is what I'd like to merge with each input. Later on, I'd update these values
    c = tf.constant([0.0] * (one_hot_size * batch_size), shape=(batch_size, one_hot_size))

    # Perform the basic NN operation
    base_out = tf.tensordot(inputs, self.weight, axes = 1) + self.bias

    # Now attempt to merge the base_out tensor and the c tensor.
    result = tf.concat((base_out, c), 1)

    return result

I get this error:

InvalidArgumentError: Shape must be rank 1 but is rank 2 for '{{node unpack_and__categorize_109/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32](unpack_and__categorize_109/add, unpack_and__categorize_109/Const, unpack_and__categorize_109/concat/axis)' with input shapes: [10], [1,10].

I understand the words well enough but I have looked at this so long that I can’t see anything anymore. I suspect my entire approach is wrong.

Hi @Tony_Ennis,

When concatenating tensors, they must have the same rank. In this case, the error occurs because one tensor has shape [10] (rank 1), while the other has shape [1, 10] (rank 2). To resolve this, you can either raise the rank-1 tensor to rank 2 (for example with tf.expand_dims or tf.reshape), or use tf.squeeze to drop the extra dimension from the rank-2 tensor, so that both ranks match before the concat.
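
For example, here is a toy illustration of both fixes (the tensor names are placeholders, not the ones from your layer):

    import tensorflow as tf

    a = tf.zeros([10])     # rank 1, like base_out in the error message
    b = tf.zeros([1, 10])  # rank 2, like c

    # Fix 1: raise a to rank 2, then concatenate along axis 1 -> shape (1, 20)
    merged = tf.concat((tf.expand_dims(a, 0), b), axis=1)

    # Fix 2: squeeze b down to rank 1, then concatenate along axis 0 -> shape (20,)
    merged = tf.concat((a, tf.squeeze(b, axis=0)), axis=0)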

Kindly refer to this updated code.
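
Below is a minimal sketch of a corrected layer. The class name and the build method are hypothetical reconstructions, since they aren't shown in your post; it assumes self.weight has shape (input_dim, units) and self.bias has shape (units,). The key changes are reading the batch size dynamically with tf.shape (so call also works while Keras traces it with a None static batch dimension) and building c with that dynamic size, so both tensors are rank 2 before the concat:

    import tensorflow as tf

    class UnpackAndCategorize(tf.keras.layers.Layer):
        """Emits 2 * units outputs per example: the dense result plus
        a same-sized block of placeholder values."""

        def __init__(self, units, **kwargs):
            super().__init__(**kwargs)
            self.units = units

        def build(self, input_shape):
            self.weight = self.add_weight(
                name="weight", shape=(input_shape[-1], self.units),
                initializer="random_normal", trainable=True)
            self.bias = self.add_weight(
                name="bias", shape=(self.units,),
                initializer="zeros", trainable=True)

        def call(self, inputs):
            # Dynamic batch size: valid even during tracing, when the
            # static shape is (None, input_dim) and inputs.shape[0] is None.
            batch_size = tf.shape(inputs)[0]

            # One placeholder slot per unit, to be updated later.
            c = tf.zeros((batch_size, self.units), dtype=inputs.dtype)

            # Basic NN operation: (batch, in) . (in, units) -> (batch, units)
            base_out = tf.tensordot(inputs, self.weight, axes=1) + self.bias

            # Both tensors are now rank 2 with the same batch dimension,
            # so concatenating along axis 1 yields (batch, 2 * units).
            return tf.concat((base_out, c), axis=1)

Calling this layer with units=10 on a batch of shape (32, 10) returns a tensor of shape (32, 20), matching the "10 inputs, 20 outputs" goal.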

Hope this helps. Thank you.