Actually, this post grew out of that thread on the OTN forum. A lot of solutions were given there, and in the OP's place I would certainly use one of them, but I decided to show how it can be done using collections.
Problem description:
Input data (from the forum):
"ok I have 4 columns returned in my query which can either have a value of 'FF' or null".
So we have a table:
SQL> create table t as
2 (select 'FF' as col1, null as col2, null as col3, null as col4 from dual union all
3 select 'FF', 'FF', 'FF', null from dual union all
4 select 'FF', 'FF', null, 'FF' from dual union all
5 select 'FF', null, 'FF', null from dual)
6 /
Table created.
SQL> select * from t;
COL1 COL2 COL3 COL4
---- ---- ---- ----
FF
FF   FF   FF
FF   FF        FF
FF        FF
SQL>
Goal:
"I want to create a fifth column with a calculation which basically states..." how many fields have 'FF' value in each row. Or it can be formulated as: "how many columns have non-null values" (it's not the same, but when the possible values domain consists of only 'FF' or NULL - it is quite similar).
Expected output:
COL1       COL2       COL3       COL4             COL5
---------- ---------- ---------- ---------- ----------
FF         NULL       NULL       NULL                1
FF         FF         FF         NULL                3
FF         FF         NULL       FF                  3
FF         NULL       FF         NULL                2
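Just for reference, the same fifth column can of course be produced without any collections at all, for instance with NVL2 (a plain-SQL sketch of my own, not necessarily one of the answers from the thread):

select col1, col2, col3, col4,
       nvl2(col1, 1, 0) + nvl2(col2, 1, 0) +
       nvl2(col3, 1, 0) + nvl2(col4, 1, 0) as col5
from t;

But the point of this post is to do it with collections.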
Solution:
First of all, let's create a collection type whose elements are varchar2:
SQL> create type test as table of varchar2(100);
2 /
Type created.
SQL>
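Just to see what an instance of this type looks like, we can build one directly with its constructor (a quick illustration of my own, it plays no role in the solution itself):

-- the TEST(...) constructor builds a nested table instance in place
select test('FF', null, 'FF') coll from dual;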
Then we run a query that returns, in a fifth field, a collection holding the values of the previous four columns:
SQL> column col1 format A10;
SQL> column col2 format A10;
SQL> column col3 format A10;
SQL> column col4 format A10;
SQL> column coll format A30;
SQL> set null 'NULL';
SQL> select col1, col2, col3, col4,
2 cast(multiset(
3 select decode(level, 1, col1, 2, col2, 3, col3, 4, col4) from dual
4 connect by level <= 4) as test) coll
5 from t
6 /
COL1       COL2       COL3       COL4       COLL
---------- ---------- ---------- ---------- ------------------------------
FF         NULL       NULL       NULL       TEST('FF', NULL, NULL, NULL)
FF         FF         FF         NULL       TEST('FF', 'FF', 'FF', NULL)
FF         FF         NULL       FF         TEST('FF', 'FF', NULL, 'FF')
FF         NULL       FF         NULL       TEST('FF', NULL, 'FF', NULL)
SQL>
Now let's look at the MULTISET INTERSECT operator.
The main idea of the solution is to get rid of the unwanted elements in the initial collection and then count how many elements are left in it.
So if we intersect our collection with a collection containing only 'FF' elements, we get rid of the redundant NULL elements.
Let's do it:
SQL> select col1,col2,col3,col4,
2 cast(multiset(
3 select decode(level,1,col1,2,col2,3,col3,4,col4) from dual
4 connect by level<=4) as test)
5 multiset intersect all
6 cast(multiset (select 'FF' from dual connect by level <= 4) as test) coll from t
7 /
COL1       COL2       COL3       COL4       COLL
---------- ---------- ---------- ---------- ------------------------------
FF         NULL       NULL       NULL       TEST('FF')
FF         FF         FF         NULL       TEST('FF', 'FF', 'FF')
FF         FF         NULL       FF         TEST('FF', 'FF', 'FF')
FF         NULL       FF         NULL       TEST('FF', 'FF')
SQL>
I use the ALL keyword explicitly (although it is the default) to emphasize that we don't want to lose values that are duplicated ('FF' in our case).
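To see the difference, here is a small illustration of my own using collection constructors: with DISTINCT the duplicate 'FF' values collapse into a single element, which would ruin the counting.

-- with ALL the intersection would keep 'FF' three times;
-- with DISTINCT duplicates are eliminated, so only one 'FF' survives
select test('FF', 'FF', 'FF') multiset intersect distinct
       test('FF', 'FF', 'FF', 'FF') coll
from dual;
-- expected result: TEST('FF')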
Another important point is to make the second collection contain at least as many elements as the first one (4 elements in our case; I generate them with a simple connect by level <= 4), because if you leave only one 'FF' element in the second collection, you will also get only one element in the resulting collection:
SQL> select col1, col2, col3, col4,
2 cast(multiset(select decode(level,1,col1,2,col2,3,col3,4,col4) from dual
3 connect by level<=4) as test)
4 multiset intersect all
5 cast(multiset (select 'FF' from dual) as test) coll from t
6 /
COL1       COL2       COL3       COL4       COLL
---------- ---------- ---------- ---------- ------------------------------
FF         NULL       NULL       NULL       TEST('FF')
FF         FF         FF         NULL       TEST('FF')
FF         FF         NULL       FF         TEST('FF')
FF         NULL       FF         NULL       TEST('FF')
SQL>
It is described in the documentation:
For example, if a particular value occurs m times in nested_table1 and n times in nested_table2, then the result would contain the element min(m,n) times.
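A tiny illustration of that rule (my own example): 'FF' occurs 3 times in the first collection and only 2 times in the second, so the intersection should keep it min(3, 2) = 2 times.

select test('FF', 'FF', 'FF') multiset intersect all
       test('FF', 'FF') coll
from dual;
-- expected result: TEST('FF', 'FF')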
OK, now our last step is to count how many elements are left in the resulting collection. This is easily done with the CARDINALITY function.
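For example (again a quick check of my own), CARDINALITY simply counts all elements of a nested table, NULL elements included, which is exactly why we had to throw the NULLs out first:

select cardinality(test('FF', 'FF', null)) cnt from dual;
-- expected result: 3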
So the final query would be:
SQL> select col1,col2,col3,col4,
2 cardinality(cast(multiset(
3 select decode(level,1,col1,2,col2,3,col3,4,col4) from dual
4 connect by level<=4) as test)
5 multiset intersect all
6 cast(multiset (select 'FF' from dual connect by level <= 4) as test)) coll from t
7 /
COL1       COL2       COL3       COL4             COLL
---------- ---------- ---------- ---------- ----------
FF         NULL       NULL       NULL                1
FF         FF         FF         NULL                3
FF         FF         NULL       FF                  3
FF         NULL       FF         NULL                2
SQL>
An alternative solution would be to use a collection of NULL elements as the second collection, intersect the two, and compute the column value as 4 - number_of_null_columns, where 4 is the total number of fields in the row.
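A sketch of that alternative (my own, and it relies on NULL elements matching each other in MULTISET INTERSECT the same way they do in the MULTISET EXCEPT query below):

select col1, col2, col3, col4,
       4 - cardinality(
             cast(multiset(
               select decode(level, 1, col1, 2, col2, 3, col3, 4, col4) from dual
               connect by level <= 4) as test)
             multiset intersect all
             cast(multiset(select null from dual connect by level <= 4) as test)) coll
from t;
-- expected result: the same 1, 3, 3, 2 as before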
But here we can use another multiset operator, MULTISET EXCEPT:
SQL> select col1, col2, col3, col4,
2 cardinality(cast(multiset(select decode(level, 1, col1, 2, col2, 3, col3, 4, col4) from dual
3 connect by level <= 4) as test)
4 multiset except all
5 cast(multiset(select null from dual connect by level<=4) as test)) coll from t
6 /
COL1       COL2       COL3       COL4             COLL
---------- ---------- ---------- ---------- ----------
FF         NULL       NULL       NULL                1
FF         FF         FF         NULL                3
FF         FF         NULL       FF                  3
FF         NULL       FF         NULL                2
SQL>
It returns all elements of the first collection that are not found in the second one.
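With ALL the occurrence counting works the same way as for MULTISET INTERSECT: an element that occurs m times in the first collection and n times in the second survives m - n times (if m > n). A tiny illustration of my own, relying on NULL elements matching each other just as the query above demonstrates:

select test('FF', 'FF', null) multiset except all test(null) coll from dual;
-- expected result: TEST('FF', 'FF')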
PS
One thing that surprised me was the following problem:
SQL> select col1, col2, col3, col4,
2 cardinality(cast(multiset(select col1 from dual union all
3 select col2 from dual union all
4 select col3 from dual union all
5 select col4 from dual) as test)
6 multiset intersect all
7 cast(multiset (select 'FF' from dual connect by level <= 4) as test)) coll from t
8 /
...
ORA-03001: unimplemented feature
SQL>
So we can't pass a query containing UNION ALL into CAST(MULTISET(...) AS ...). I didn't find this limitation mentioned in the documentation, though.